NIC Virtualization in Lenovo Flex System Fabric Solutions


Front cover

NIC Virtualization in Lenovo Flex System Fabric Solutions

Last Update: September 2014

Introduces NIC virtualization concepts and technologies
Describes UFP and vNIC deployment scenarios
Provides UFP and vNIC configuration examples
Useful knowledge for networking professionals

Scott Irwin
Scott Lorditch
Matt Slavin
Ilya Krutov


NIC Virtualization in Lenovo Flex System Fabric Solutions

September 2014

SG

Note: Before using this information and the product it supports, read the information in Notices on page v.

Last update: September 2014

This edition applies to:

- Networking Operating System 7.8
- Flex System Fabric CN4093 10Gb Converged Scalable Switch
- Flex System Fabric EN4093R 10Gb Scalable Switch
- Flex System Embedded 10Gb Virtual Fabric Adapter
- Flex System CN4054 10Gb Virtual Fabric Adapter
- Flex System CN4054R 10Gb Virtual Fabric Adapter

Copyright Lenovo. All rights reserved.

Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract

Contents

Notices
  Trademarks

Preface
  The team who wrote this book
  Comments welcome
  Do you have the latest version?

Chapter 1. I/O module and NIC virtualization features in the Flex System environment
  1.1 Overview of Flex System network virtualization
  1.2 Introduction to NIC virtualization
    1.2.1 vNIC based NIC virtualization
    1.2.2 Unified Fabric Port-based NIC virtualization
    1.2.3 Comparing vNIC modes and UFP modes
  1.3 Introduction to I/O module virtualization
    1.3.1 Introduction to vLAG
    1.3.2 Introduction to stacking
    1.3.3 Introduction to SPAR
    1.3.4 Easy Connect Q-in-Q solutions
    1.3.5 Introduction to the Failover feature
  1.4 Introduction to converged fabrics
    1.4.1 FCoE
    1.4.2 iSCSI
    1.4.3 iSCSI versus FCoE

Chapter 2. Flex System networking architecture and Fabric portfolio
  2.1 Enterprise Chassis I/O architecture
  2.2 Flex System Fabric I/O modules
    2.2.1 Flex System Fabric EN4093R 10Gb Scalable Switch
    2.2.2 Flex System Fabric CN4093 10Gb Converged Scalable Switch
    2.2.3 Flex System Fabric SI4093 System Interconnect Module
    2.2.4 I/O modules and cables
  2.3 Flex System Virtual Fabric adapters
    2.3.1 Embedded 10Gb Virtual Fabric Adapter
    2.3.2 Flex System CN4054/CN4054R 10Gb Virtual Fabric Adapters

Chapter 3. NIC virtualization considerations on the switch side
  3.1 Virtual Fabric vNIC solution capabilities
    3.1.1 Virtual Fabric mode vNIC
    3.1.2 Switch Independent mode vNIC
  3.2 Unified Fabric Port feature
    3.2.1 UFP Access and Trunk modes
    3.2.2 UFP Tunnel mode
    3.2.3 UFP FCoE mode
    3.2.4 UFP Auto mode
    3.2.5 UFP vport considerations
  3.3 Compute node NIC to I/O module connectivity mapping
    3.3.1 Embedded 10 Gb VFA (LOM): Mezzanine
    3.3.2 Flex System CN4054/CN4054R 10Gb VFA: Mezzanine 1
    3.3.3 Flex System CN4054/CN4054R 10Gb VFA: Mezzanine 1 and 2
    3.3.4 Flex System x222 Compute Node

Chapter 4. NIC virtualization considerations on the server side
  4.1 Enabling virtual NICs on the server via UEFI
    4.1.1 Getting in to the virtual NIC configuration section of UEFI
    4.1.2 Initially enabling virtual NIC functionality via UEFI
    4.1.3 Special settings for the different modes of virtual NIC via UEFI
    4.1.4 Setting the Emulex virtual NIC settings back to factory default
  4.2 Enabling virtual NICs via Configuration Patterns
  4.3 Using physical and virtual NICs in the operating systems
    4.3.1 Introduction to teaming/bonding on the server
    4.3.2 Operating system side teaming/bonding and upstream network requirements
    4.3.3 Physical NIC connections and logical enumeration

Chapter 5. Flex System NIC virtualization deployment scenarios
  5.1 Introduction to deployment examples
  5.2 UFP mode virtual NIC and Layer 2 Failover
    5.2.1 Components
    5.2.2 Topology
    5.2.3 Use cases
    5.2.4 Configuration
    5.2.5 Confirming operation of the environment
  5.3 UFP mode virtual NIC with vLAG and FCoE
    5.3.1 Components
    5.3.2 Topology
    5.3.3 Use cases
    5.3.4 Configuration
    5.3.5 Confirming operation of the environment
  5.4 pNIC and vNIC Virtual Fabric modes with Layer 2 Failover
    5.4.1 Components
    5.4.2 Topologies
    5.4.3 Use cases
    5.4.4 Configurations
    5.4.5 Verifying operation
  5.5 Switch Independent mode with SPAR
    5.5.1 Components
    5.5.2 Topology
    5.5.3 Use cases
    5.5.4 Configuration
    5.5.5 Verifying operation

Abbreviations and acronyms

Related publications
  Lenovo Press

Notices

Lenovo may not offer the products, services, or features discussed in this document in all countries. Consult your local Lenovo representative for information on the products and services currently available in your area. Any reference to a Lenovo product, program, or service is not intended to state or imply that only that Lenovo product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any Lenovo intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any other product, program, or service.

Lenovo may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

Lenovo (United States), Inc
Think Place - Building One
Morrisville, NC
U.S.A.
Attention: Lenovo Director of Licensing

LENOVO PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. Lenovo may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

The products described in this document are not intended for use in implantation or other life support applications where malfunction may result in injury or death to persons. The information contained in this document does not affect or change Lenovo product specifications or warranties. Nothing in this document shall operate as an express or implied license or indemnity under the intellectual property rights of Lenovo or third parties. All information contained in this document was obtained in specific environments and is presented as an illustration. The result obtained in other operating environments may vary.

Lenovo may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this Lenovo product, and use of those Web sites is at your own risk.

Any performance data contained herein was determined in a controlled environment. Therefore, the result obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Trademarks

Lenovo, the Lenovo logo, and For Those Who Do are trademarks or registered trademarks of Lenovo in the United States, other countries, or both. These and other Lenovo trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by Lenovo at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of Lenovo trademarks is available on the Web.

The following terms are trademarks of Lenovo in the United States, other countries, or both:

Blade Network Technologies
BladeCenter
BNT
Flex System
Lenovo
Omni Ports
Lenovo (logo)
System x
VMready
vNIC

The following terms are trademarks of other companies:

Intel, Xeon, and the Intel logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Hyper-V, Microsoft, Windows, Windows Server, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Preface

The deployment of server virtualization technologies in data centers requires significant efforts in providing sufficient network I/O bandwidth to satisfy the demand of virtualized applications and services. For example, every virtualized system can host several dozen applications and services. Each of these services requires a certain bandwidth (or speed) to function properly. Furthermore, because of different network traffic patterns that are relevant to different service types, these traffic flows can interfere with each other. They can lead to serious network problems, including the inability of the service to perform its functions.

NIC virtualization in Lenovo Flex System Fabric solutions addresses these issues. The solutions are based on the Flex System Enterprise Chassis with a 10 Gbps Converged Enhanced Ethernet infrastructure. This infrastructure is built on Flex System Fabric CN4093 and EN4093R 10 Gbps Ethernet switch modules and Flex System Fabric SI4093 Switch Interconnect modules in the chassis, and the Emulex Virtual Fabric Adapters in each compute node.

This book introduces NIC virtualization concepts and technologies, describes their deployment scenarios, and provides configuration examples that use Lenovo Networking OS technologies combined with the Emulex Virtual Fabric adapters.

This book is for networking professionals who want to learn how to implement NIC virtualization solutions and switch interconnect technologies on Flex System by using the Unified Fabric Port (UFP) mode, Switch Independent mode, and Virtual Fabric mode. This book assumes that the reader has basic knowledge of networking concepts and technologies, including the OSI model, Ethernet LANs, Spanning Tree Protocol, VLANs, VLAN tagging, uplinks, trunks, and static and dynamic (LACP) link aggregation.

The team who wrote this book

This document is produced by the following subject matter experts working in the Lenovo offices in Morrisville, NC, USA.

Ilya Krutov is a Project Leader at Lenovo Press. He manages and produces pre-sale and post-sale technical publications on various IT topics, including x86 rack and blade servers, server operating systems, virtualization and cloud, networking, storage, and systems management. Ilya has more than 15 years of experience in the IT industry, backed by professional certifications from Cisco Systems, IBM, and Microsoft. During his career, Ilya has held a variety of technical and leadership positions in education, consulting, services, technical sales, marketing, channel business, and programming. He has written more than 200 books, papers, and other technical documents. Ilya has a Specialist's degree with honors in Computer Engineering from the Moscow State Engineering and Physics Institute (Technical University).

Scott Irwin is a Consulting Systems Engineer (CSE) with Lenovo Networking, formerly from IBM and Blade Network Technologies (BNT). His networking background spans well over 16 years as a Customer Support Escalation Engineer and a customer-facing Field Systems Engineer. His focus is on deep customer troubleshooting, and his responsibilities include supporting customer proofs of concept, assisting with paid installations and training, and supporting pre- and post-sales activities with customers in the Public Sector, High Frequency Trading, Service Provider, Midmarket, and Enterprise markets.

Scott Lorditch is a Consulting Systems Engineer for Lenovo. He performs network architecture assessments and develops designs and proposals for solutions that involve Lenovo Networking products. He also developed several training and lab sessions for technical and sales personnel. Scott joined IBM as part of the acquisition of Blade Network Technologies and joined Lenovo as part of the System x acquisition from IBM. Scott spent almost 20 years working on networking in various industries, as a senior network architect, a product manager for managed hosting services, and manager of electronic securities transfer projects. Scott holds a BS degree in Operations Research with a specialization in computer science from Cornell University.

Matt Slavin is a Consulting Systems Engineer for Lenovo Networking, based out of Tulsa, Oklahoma. He provides network consulting skills to the Americas. He has over 30 years of hands-on systems and network design, installation, and troubleshooting experience. Most recently, he has focused on data center networking, where he is leading client efforts to adopt new technologies into day-to-day operations. Matt joined Lenovo through the acquisition of the IBM System x team. Before that acquisition, he worked at some of the top systems and networking companies in the world.

Comments welcome

Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book in one of the following ways:

- Use the online feedback form found at the web page for this document.
- Send your comments in an e-mail to: comments@lenovopress.com

Do you have the latest version?

We update our books and papers from time to time, so check whether you have the latest version of this document by clicking the Check for Updates button on the front page of the PDF. Pressing this button will take you to a web page that will tell you if you are reading the latest version of the document and give you a link to the latest version if needed. While you're there, you can also sign up to get notified via e-mail whenever we make an update.


Chapter 1. I/O module and NIC virtualization features in the Flex System environment

This chapter introduces the various virtualization features that are available with certain I/O modules and converged network adapters (CNAs) in the Flex System environment. The primary focus of this publication is the EN4093R, CN4093, and the SI4093, with related server-side converged network adapter (CNA) or Virtual Fabric Adapter (VFA) virtualization features. Although other I/O modules are available for the Flex System Enterprise Chassis environment, those other I/O modules do not support the virtualization features discussed in this document and are not covered here (unless otherwise noted).

This chapter includes the following topics:

- Overview of Flex System network virtualization
- Introduction to NIC virtualization
- Introduction to I/O module virtualization
- Introduction to converged fabrics

1.1 Overview of Flex System network virtualization

The term virtualization can mean many different things to different people, and in different contexts. For example, in the server world, the term is often associated with taking bare metal platforms and adding a layer of software (referred to as a hypervisor) that permits multiple virtual machines (VMs) to run on that single physical platform, with each VM thinking it owns the entire hardware platform.

In the network world, there are many different concepts of virtualization. One is overlay technology, with which a user can run one network on top of another network, usually with the goal of hiding the complexities of the underlying network (often referred to as overlay networking). Another form of network virtualization is OpenFlow technology, which decouples a switch's control plane from the switch and allows switching path decisions to be made from a central control point. There are other forms of virtualization, such as cross-chassis aggregation (also known as cross-switch aggregation), virtualized NIC technologies, and converged fabrics.

This publication is focused on the latter set of cross-chassis aggregation, virtualized NIC technologies, and converged fabrics; specifically, the following features:

- Converged fabrics: Fibre Channel over Ethernet (FCoE) and Internet Small Computer Systems Interconnect (iSCSI)
- Virtual Link Aggregation (vLAG): A form of cross-switch aggregation
- Stacking: Virtualizing the management plane and the switching fabric
- Switch Partitioning (SPAR): Masking the I/O module from the host and upstream network
- Easy Connect Q-in-Q solutions: More ways to mask the I/O modules from connecting devices
- NIC virtualization: Allowing a single physical 10 GbE NIC to represent multiple NICs to the host OS

Although we introduce all of these topics in this chapter, the primary focus of this publication is how NIC virtualization integrates with the various other features and the surrounding customer environment. The following specific NIC virtualization features are described:

- Virtual Fabric mode: Also known as vNIC Virtual Fabric mode, which includes Dedicated Uplink Mode (default) and Shared Uplink Mode (optional) operations.
- Switch Independent Mode: Also known as vNIC Switch Independent mode.
- Unified Fabric Port: Also known as Unified Fabric Protocol (UFP); all modes.

Important: The term vNIC can be used generically for all virtual NIC technologies, or as a vendor-specific term. For example, VMware calls the virtual NIC that is inside a VM a vNIC. Unless otherwise noted, the use of the term vNIC in this publication refers to a specific feature that is available on the Flex System I/O modules and Emulex CNAs inside physical hosts. In a related fashion, the term vport has multiple connotations; for example, it is used by Microsoft for the Hyper-V environment. Unless otherwise noted, the use of the term vport in this publication refers to the UFP feature on the Flex System I/O modules and Emulex CNAs inside physical hosts.

Important: All I/O module features that are described in this paper are based on the latest available firmware at the time of this writing (Networking OS 7.8 for the EN4093R, CN4093, and SI4093 modules).

1.2 Introduction to NIC virtualization

This section introduces the two primary types of NIC virtualization (vNIC and UFP) that are available on the Flex System I/O modules and adapters, and the various subelements of these virtual NIC technologies.

The deployment of server virtualization technologies in data centers requires significant efforts to provide sufficient network I/O bandwidth (or speed) to satisfy the demand of virtualized applications and services. For example, every virtualized system can host several dozen network applications and services, and each of these services requires a certain bandwidth to function properly. Also, because of different network traffic patterns that are relevant to different service types, these traffic flows might interfere with each other. This interference can lead to serious network problems, including the inability of the service to perform its functions.

Providing sufficient bandwidth and isolation to virtualized applications in a 1 Gbps network infrastructure might be challenging for blade-based deployments where the number of physical I/O ports per compute node is limited. For example, a maximum of 12 physical ports per single-wide compute node (up to six Ethernet ports per adapter) can be used for network connectivity. With 1 GbE, a total network bandwidth of 12 Gb per compute node is available for Gigabit Ethernet infrastructures, which leaves no room for future growth. In addition, traffic flows are isolated on a physical port basis, and the bandwidth per interface is static with a maximum bandwidth of 1 Gb per flow, which limits the flexibility of bandwidth usage.

Flex System Fabric solutions address these issues by increasing the number of available Ethernet ports and providing more flexibility in allocating the available bandwidth to meet specific application requirements. By virtualizing a 10 Gbps NIC, its resources can be divided into multiple logical instances or virtual NICs. Each virtual NIC appears as a regular, independent NIC to the server operating system or hypervisor, and each virtual NIC uses a portion of the overall bandwidth of the physical NIC. For example, a NIC partition with a maximum bandwidth of 4 Gbps appears to the host applications as a physically distinct 4 Gbps Ethernet adapter. Also, the NIC partitions provide traffic forwarding and port isolation.

The virtual NIC technologies that are described for the I/O module here are all directly tied to the Emulex CNA offerings for the Flex System environment, and are documented in 2.3, "Flex System Virtual Fabric adapters".

1.2.1 vNIC based NIC virtualization

vNIC is the original virtual NIC technology that was used in the BladeCenter 10 Gb Virtual Fabric Switch Module. It was brought forward into the PureFlex System environment to allow customers that have standardized on vNIC to continue to use it with the PureFlex System solutions.

vNIC has the following primary modes:

- Virtual Fabric mode

  Virtual Fabric mode offers advanced virtual NICs to servers, and it requires support on the switch side. In Virtual Fabric mode, the Virtual Fabric Adapter (VFA) in the compute node communicates with the Flex System switch to obtain vNIC parameters (by using DCBX). A special tag is added within each data packet and is later removed by the NIC and switch for each vNIC group to maintain separation of the virtual data paths. In Virtual Fabric mode, you can change the bandwidth allocations through the switch user interfaces without requiring a reboot of the server. vNIC bandwidth allocation and metering are performed by both the switch and the VFA: a bidirectional virtual channel of an assigned bandwidth is established between them for every defined vNIC.

- Switch Independent mode

  Switch Independent mode offers virtual NICs to servers with no special I/O module side configuration. It extends the existing customer VLANs to the virtual NIC interfaces. The IEEE 802.1Q VLAN tag is essential to the separation of the vNIC groups by the NIC adapter or driver and the switch. The VLAN tags are added to the packet by the applications or drivers at each end station rather than by the switch. vNIC bandwidth allocation and metering are performed only by the VFA; the switch is unaware that the 10 GbE NIC is seen as multiple logical NICs in the OS. In this case, a unidirectional virtual channel is established where bandwidth management is performed only for the outgoing traffic on the VFA side (server-to-switch). The incoming traffic (switch-to-server) uses all available physical port bandwidth because there is no metering performed on either the VFA or the switch side.

Virtual Fabric mode vNIC has the following submodes:

- vNIC Virtual Fabric - Dedicated Uplink Mode:
  - Provides a Q-in-Q tunneling action for each vNIC group.
  - Each vNIC group must have its own dedicated uplink path out.
  - vNICs in one vNIC group cannot communicate with vNICs in any other vNIC group without first exiting to the upstream network for Layer 3 routing.

- vNIC Virtual Fabric - Shared Uplink Mode:
  - Each vNIC group provides a single VLAN for all vNICs in that group.
  - Each vNIC group must use a unique VLAN (the same VLAN cannot be used on more than a single vNIC group).
  - Servers cannot use tagging when Shared Uplink Mode is enabled.
  - As with vNICs in Dedicated Uplink Mode, vNICs in one vNIC group cannot talk with vNICs in any other vNIC group without first exiting to the upstream network for Layer 3 routing.

For more information about enabling and configuring these modes, see Chapter 4, "NIC virtualization considerations on the server side", and Chapter 5, "Flex System NIC virtualization deployment scenarios". A minimal switch-side configuration sketch follows.
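The following fragment sketches what a Virtual Fabric mode (Dedicated Uplink Mode) configuration can look like in the Networking OS ISCLI. The port names (INTA1, EXT1), vNIC group number, VLAN, and bandwidth value are illustrative assumptions rather than values from this book, and the exact command set can vary by firmware release, so treat this as a sketch rather than a definitive procedure:

    ! Sketch only: enable the vNIC feature, carve vNIC 1 out of internal port
    ! INTA1, and bind it to vNIC group 1 with uplink EXT1 (example names/values)
    vnic enable
    vnic port INTA1 index 1
            bandwidth 25          ! 25% of the 10 Gb physical port
            enable
            exit
    vnic vnicgroup 1
            vlan 100              ! outer tag used internally by the vNIC group
            member INTA1.1        ! the vNIC defined above
            port EXT1             ! dedicated uplink for this vNIC group
            enable
            exit

Because the vNIC group owns its own uplink and wraps traffic in an outer tag, no customer VLANs need to be defined on the switch for this group.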

1.2.2 Unified Fabric Port-based NIC virtualization

Unified Fabric Port (UFP) is the current direction of Lenovo NIC virtualization and provides a more feature-rich solution than the original vNIC Virtual Fabric mode. As with Virtual Fabric mode vNIC, UFP allows carving up a single 10 Gb port into four virtual NICs (called vports in UFP). UFP has the following modes associated with it:

- Tunnel mode: Provides Q-in-Q operation, where the vport is customer VLAN-independent (similar to vNIC Virtual Fabric Dedicated Uplink Mode).
- Trunk mode: Provides a traditional 802.1Q trunk mode (multi-VLAN trunk link) to the virtual NIC (vport) interface; that is, it permits host side tagging.
- Access mode: Provides a traditional access mode (single untagged VLAN) to the virtual NIC (vport) interface, which is similar to a physical port in access mode.
- FCoE mode: Provides FCoE functionality to the vport.
- Auto-VLAN mode: Automatic VLAN creation for 802.1Qbg and VMready environments.

Only one vport (vport 2) per physical port can be bound to FCoE. If FCoE is not wanted, vport 2 can be configured for one of the other modes.

For more information about enabling and configuring these modes, see Chapter 4, "NIC virtualization considerations on the server side", and Chapter 5, "Flex System NIC virtualization deployment scenarios".

1.2.3 Comparing vNIC modes and UFP modes

As a rule, if a customer wants virtualized NICs in the PureFlex System environment, UFP is usually the preferred solution because all of the new feature development is going into UFP. If a customer has standardized on the original vNIC Virtual Fabric mode, they can continue to use that mode in a fully supported fashion.

If a customer does not want any of the virtual NIC functionality that is controlled by the I/O module (that is, virtual NICs controlled and configured only on the server side), Switch Independent mode vNIC is the solution of choice. This mode has the advantage of being I/O module-independent, such that any upstream I/O module can be used. Its disadvantages are that bandwidth restrictions can be enforced only from the server side (not the I/O module side), and changing the bandwidth requires a reboot of the server. Bandwidth controls for the other virtual NIC modes that are described here are changed from the switch side, enforce bandwidth restrictions bidirectionally, and can be changed dynamically, with no reboot required. As a concrete point of comparison, the sketch that follows shows how a UFP vport is defined on the switch side.
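The following ISCLI fragment sketches the switch-side definition of a single UFP vport in tunnel mode. The port name, VLAN, and bandwidth figures are illustrative assumptions, and exact syntax can differ between Networking OS releases, so verify against the documentation for your firmware:

    ! Sketch only: define vport 1 on internal port INTA1 in tunnel mode
    ! (names and values are examples)
    ufp port INTA1 vport 1
            network mode tunnel         ! Q-in-Q; customer VLAN-independent
            network default-vlan 4091   ! outer tag for the tunnel
            qos bandwidth min 25        ! guarantee 25% of the 10 Gb port
            qos bandwidth max 100       ! allow bursting to full port speed
            enable
            exit
    ufp port INTA1 enable
    ufp enable

Comparable vports in access or trunk mode would use network mode access or network mode trunk and carry real customer VLANs instead of an outer tunnel tag.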

Table 1-1 shows some of the items that can affect the decision-making process.

Table 1-1 Attributes of virtual NIC options (VF = Virtual Fabric vNIC mode; SI = Switch Independent mode vNIC)

Capability                                                   VF Dedicated  VF Shared  SI   UFP
                                                             uplink        uplink
Requires support in the I/O module                           Yes           Yes        No   Yes
Requires support in the NIC/CNA                              Yes           Yes        Yes  Yes
Supports adapter transmit rate control                       Yes           Yes        Yes  Yes
Supports I/O module transmit rate control                    Yes           Yes        No   Yes
Supports changing rate without restart of node               Yes           Yes        No   Yes
Requires a dedicated uplink path per vNIC group or vport     Yes           No         No   Yes (for vports in Tunnel mode)
Support for node OS-based tagging                            Yes           No         Yes  Yes
Support for failover per vNIC group/UFP vport                Yes           Yes        No   Yes
Support for more than one uplink path per vNIC/vport group   No            Yes        Yes  Yes (for vports in Trunk and Access modes)
Supported regardless of the model of Flex System I/O module  No            No         Yes  No
Supported by vLAG                                            No            No         Yes  Yes (for uplinks out of the I/O module carrying vport traffic)
Supported by SPAR                                            No            No         Yes  No
Supported by stacking                                        Yes           Yes        Yes  Yes
Supported by SI4093                                          No            No         Yes  Yes
Supported by EN4093                                          Yes           Yes        Yes  Yes
Supported by CN4093                                          Yes           Yes        Yes  Yes

For more information about virtual NIC operational characteristics from the switch side, see Chapter 3, "NIC virtualization considerations on the switch side". For more information about virtual NIC operational characteristics from the server side, see Chapter 4, "NIC virtualization considerations on the server side".

1.3 Introduction to I/O module virtualization

This section provides a brief overview of Flex System I/O module virtualization technologies.

1.3.1 Introduction to vLAG

In its simplest terms, vLAG is a technology that is designed to enhance traditional Ethernet link aggregations (sometimes referred to generically as PortChannels or EtherChannels).

Note: vLAG is not a form of aggregation in its own right; instead, it is an enhancement to aggregations.

Under current IEEE specifications, an aggregation is still defined as a bundle of similar links between two (and only two) devices, bound together to operate as a single logical link. By today's standards-based definitions, you cannot create an aggregation on one device and have the links of that aggregation connect to more than a single device on the other side of the aggregation. The use of only two devices in this fashion limits the ability to offer certain robust designs.

Although the standards bodies are working on a solution that provides split aggregations across devices, most vendors have developed their own versions of this multi-chassis aggregation. For example, Cisco has virtual PortChannel (vPC) on NX-OS products and Virtual Switch System (VSS) on the 6500 IOS products. Lenovo offers Virtual Link Aggregation (vLAG) on many of the Lenovo Top of Rack (ToR) solutions and on the EN4093R and CN4093 Flex System I/O modules.

The primary goal of virtual link aggregation is to overcome the limit that is imposed by current standards-based aggregation and provide a distributed aggregation across a pair of switches instead of a single switch. Doing so reduces single points of failure while still maintaining a loop-free, non-blocking environment.

Figure 1-1 shows an example of how vLAG can create a single common uplink out of a pair of embedded I/O modules. This configuration creates a non-looped path with no blocking links, which offers the maximum amount of bandwidth for the links and no single point of failure.

Figure 1-1 Non-looped design that uses multi-chassis aggregation on both sides (the upstream ToR switch pair joined by vLAG/vPC/mLAG, and the two I/O modules in the chassis joined by vLAG toward the Compute Node NICs)

Although this vLAG-based design is considered the most optimal, not all I/O module virtualization options support this topology; for example, Virtual Fabric vNIC mode and SPAR are not supported by vLAG. Another potentially limiting factor with vLAG (and other such cross-chassis aggregations, such as vPC and VSS) is that it supports only a pair of switches that act as one for this cross-chassis aggregation, and not more than two. If you want to split an aggregation across more than two switches, stacking might be an option to consider. A configuration sketch of the vLAG pattern follows.
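As an illustration of the pattern in Figure 1-1, the following ISCLI fragment sketches the vLAG-specific part of the configuration on one of the two embedded I/O modules. The tier ID, LACP keys, and port names are illustrative assumptions, the peer module needs a matching configuration, and exact syntax can vary by Networking OS release:

    ! Sketch only: EXT1-EXT2 join the cross-chassis LACP uplink aggregation,
    ! EXT8-EXT9 form the inter-switch link (ISL) to the vLAG peer (example names)
    interface port EXT1,EXT2
            lacp mode active
            lacp key 1000
            exit
    interface port EXT8,EXT9
            lacp mode active
            lacp key 2000
            exit
    vlag tier-id 10            ! must match on both vLAG peers
    vlag isl adminkey 2000     ! the aggregation that acts as the ISL
    vlag adminkey 1000 enable  ! distribute the uplink aggregation across peers
    vlag enable

The upstream ToR pair would terminate EXT1/EXT2 from both I/O modules in its own multi-chassis aggregation (vPC, mLAG, or vLAG), yielding the non-looped, non-blocking topology shown in the figure.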

1.3.2 Introduction to stacking

By using stacking, you can take up to eight physical I/O modules and treat them as a single logical switch from a port usage and management perspective. Ports on different I/O modules in the stack can be part of a common aggregation, and you log in to only a single IP address to manage all I/O modules in the stack. For devices that attach to the stack, the stack looks and acts like a single large switch.

Stacking is supported on the EN4093R and CN4093 I/O modules. It is provided by reserving a group of uplinks as stacking links and creating a ring of I/O modules with these links. The ring design ensures that the loss of a single link or single I/O module in the stack does not lead to a disruption of the stack.

Before the v7.7 releases of code, it was possible to stack the EN4093R only into a common stack of like-model I/O modules. In v7.7 and later code, support was added for adding a pair of CN4093s into a hybrid stack of EN4093s to add Fibre Channel Forwarder (FCF) capability to the stack. The limit for this hybrid stacking is a maximum of 6x EN4093Rs and 2x CN4093s in a common stack.

Important: When the EN4093R and CN4093 are used in hybrid stacking, only the CN4093 can act as a stack master or stack backup master for the stack.

Stacking the Flex System chassis I/O modules with Lenovo Top of Rack switches that also support stacking is not allowed. Connections from a stack of Flex System chassis I/O modules to upstream switches can be made with normal single or aggregated connections, including the use of vLAG/vPC on the upstream switches to connect links across stack members into a common non-blocking fabric between the stack and the Top of Rack switches.

An example of four I/O modules in a highly available stacking design is shown in Figure 1-2.

Figure 1-2 Example of stacking in the Flex System environment (a stacking ring of four I/O modules across two chassis, with uplinks to a pair of ToR switches joined by multi-chassis aggregation)

This example shows a design with no single points of failure (via a stack of four I/O modules in a single stack) and a pair of upstream vLAG/vPC-connected switches.

One of the potential limitations of the current implementation of stacking is that if an upgrade of code is needed, a reload of the entire stack must occur. Where upgrades are uncommon and can be scheduled for non-production hours, a single stack design is efficient and acceptable. However, some customers do not want any downtime (scheduled or otherwise); for them, a single stack design is not an acceptable solution. For these users that still want to make the most use of stacking, a two-stack design might be an option. This design stacks a set of I/O modules in bay 1 into one stack, and a set of I/O modules in bay 2 into a second stack. The primary advantage of a two-stack design is that each stack can be upgraded one at a time, with the running stack maintaining connectivity for the compute nodes during the upgrade and reload of the other stack. The downside of the two-stack design is that traffic that flows from one stack to the other stack must go through the upstream network.

Stacking might not be suitable for all customers. However, if it is wanted, it is another tool that is available for building a robust infrastructure by using the Flex System I/O modules.

1.3.3 Introduction to SPAR

Switch partitioning (SPAR) is a feature that, among other things, allows a physical I/O module to be divided into multiple logical switches. After SPAR is configured, ports within a specific SPAR group can communicate only with each other. Ports that are members of different SPAR groups on the same I/O module cannot communicate directly with each other without going outside the I/O module. The EN4093R, CN4093, and SI4093 I/O modules support SPAR.

SPAR features the following modes of operation:

- Pass-through domain mode (also known as transparent mode)

  This mode of SPAR uses a Q-in-Q function to encapsulate all traffic that passes through the switch in a second layer of VLAN tagging. It is the default mode when SPAR is enabled and is VLAN-independent owing to this Q-in-Q operation: it passes tagged and untagged packets through the SPAR session without looking at or interfering with any customer-assigned tag. SPAR pass-through mode supports passing FCoE packets to an upstream FCF, but without FIP snooping within the SPAR group.

- Local domain mode

  This mode is not VLAN-independent and requires a user to create any required VLANs in the SPAR group. Currently, there is a limit of 256 VLANs in Local domain mode. Support is available for FIP snooping on FCoE sessions in Local domain mode. Unlike pass-through domain mode, Local domain mode provides strict control of end host VLAN isolation.

Consider the following points regarding SPAR:

- SPAR is disabled by default on the EN4093R and CN4093. SPAR is enabled by default on the SI4093, with all base-licensed internal and external ports defaulting to a single pass-through SPAR group. This default SI4093 configuration can be changed, if wanted.
- Any port can be a member of only a single SPAR group at one time.
- Only a single uplink path is permissible per SPAR group (it can be a single link, a single static aggregation, or a single LACP aggregation). This SPAR-enforced restriction ensures that no network loops are possible with ports in a SPAR group.
- SPAR cannot be used with UFP or Virtual Fabric vNIC as of this writing. Switch Independent mode vNIC is supported by SPAR. UFP support is slated for a future release.
- Up to eight SPAR groups per I/O module are supported. This number might be increased in a future release.
- SPAR is not supported with the vLAG, stacking, or tagpvid-ingress features.

SPAR can be a useful solution in environments where simplicity is paramount. A minimal configuration sketch follows.
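The following ISCLI fragment sketches the creation of one pass-through SPAR group on an EN4093R or CN4093. The SPAR number, member range, and uplink port are illustrative assumptions, and the exact command set can vary by Networking OS release, so verify it against the Application Guide for your firmware:

    ! Sketch only: SPAR group 1 in pass-through (transparent) mode, internal
    ! ports INTA1-INTA4 with a single uplink on EXT1 (example names/values)
    spar 1
            domain mode passthrough           ! Q-in-Q, VLAN-independent
            domain default member INTA1-INTA4
            uplink port EXT1                  ! only one uplink path per SPAR group
            enable
            exit

Ports in this group can reach each other and EXT1 only; reaching a different SPAR group on the same module requires going out through the upstream network.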

1.3.4 Easy Connect Q-in-Q solutions

The Easy Connect concept (which is often referred to as Easy Connect mode or Transparent mode) is not a specific feature. Instead, it is a way of using one of four different existing features to minimize ongoing I/O module management requirements. The primary goal of Easy Connect is to make an I/O module transparent to the hosts and the upstream network they must access, which reduces the management requirements for I/O modules in an Easy Connect mode.

Several features can be used to accomplish an Easy Connect solution. The following aspects are common to Easy Connect solutions:

- At the heart of Easy Connect is some form of Q-in-Q tagging to mask packets that travel through the I/O module. This tagging is a fundamental requirement of any Easy Connect solution: the attached hosts and upstream network can communicate by using any VLAN (tagged or untagged), and the I/O module passes those packets through to the other side by wrapping them in an outer VLAN tag, then removes that outer VLAN tag as the packet exits the I/O module, which makes the I/O module VLAN-independent. This Q-in-Q operation removes the need to manage VLANs on the I/O module, which is usually one of the larger ongoing management requirements of a deployed I/O module.
- Pre-creating an aggregation of the uplinks (in some cases, all of the uplinks) removes the possibility of loops. If all uplinks are not used, any unused uplinks/ports should be disabled to ensure that loops are not possible.
- Optionally disabling spanning tree ensures that the upstream network does not receive any spanning-tree BPDUs. This function is especially important for upstream devices that shut down a port if BPDUs are received, such as a Cisco FEX device, or an upstream switch that is running some form of BPDU guard.

After it is configured, an I/O module in Easy Connect mode does not require ongoing configuration changes as a customer adds and removes VLANs on the hosts and upstream network. In essence, Easy Connect turns the I/O module into a VLAN-independent port aggregator, with support for growing up to the maximum bandwidth of the product (for example, adding upgrade Feature on Demand [FoD] keys to the I/O module to increase the 10 Gb links to Compute Nodes and the 10 Gb and 40 Gb links to the upstream networks).

The following primary methods are used for deploying an Easy Connect solution:

- Use an I/O module that defaults to a form of Easy Connect.

  For customers that want an Easy Connect type of solution that is immediately ready for use (zero touch I/O module deployment), the SI4093 provides this function by default. The SI4093 accomplishes this function by having the following factory default configuration:

  - All base-licensed internal and external ports are put into a single SPAR group.
  - All uplinks are put into a single common LACP aggregation, and the LACP suspend-port feature is enabled.
  - The Failover feature is enabled on the common LACP key.
  - No spanning-tree support (the SI4093 is designed to never permit more than a single uplink path per SPAR, so it cannot create a loop and does not support spanning tree).

- For customers that want the option to use advanced features but also want an Easy Connect mode solution, the EN4093R and CN4093 offer configurable options. These options can make them transparent to the attaching Compute Nodes and upstream network switches while maintaining the option of changing to more advanced modes of configuration, when needed.

The SI4093 accomplishes this task by defaulting to the SPAR feature in pass-through mode, which puts all compute node ports and all uplinks into a common Q-in-Q group. For the EN4093R and CN4093, there are several features that can be implemented to accomplish this Easy Connect support. The primary difference between these I/O modules and the SI4093 is that you must first perform a small set of configuration steps to set up the EN4093R and CN4093 into an Easy Connect mode, after which minimal management of the I/O module is required. For these I/O modules, Easy Connect mode can be configured by using one of the following features:

- The SPAR feature that is the default on the SI4093 and can be configured on the EN4093R and CN4093
- Tagpvid-ingress mode
- vNIC Virtual Fabric Dedicated Uplink Mode
- UFP vport tunnel mode

In general, all of these features provide this Easy Connect functionality, with each having some pros and cons. For example, if you want to use Easy Connect with vLAG, you should use the tagpvid-ingress mode or the UFP vport tunnel mode (SPAR and Virtual Fabric vNIC do not permit the vLAG ISL). However, if you want to use Easy Connect with FCoE today, you cannot use tagpvid-ingress and must use a different form of Easy Connect, such as vNIC Virtual Fabric Dedicated Uplink Mode or UFP tunnel mode (SPAR pass-through mode allows FCoE but does not support FIP snooping, which might or might not be a concern for some customers).

As an example of how Easy Connect works (in all Easy Connect modes), consider the tagpvid-ingress Easy Connect mode operation that is shown in Figure 1-3. When all internal ports and the wanted uplink ports are placed into a common PVID/Native VLAN (4091 in this example) and tagpvid-ingress is enabled on these ports (with any aggregation protocol on the uplinks that is required to match the other end of those links), all ports with a matching Native/PVID setting on this I/O module are part of a single Q-in-Q tunnel. The Native/PVID VLAN on the port acts as the outer tag, and the I/O module switches traffic based on this outer tag VLAN. The inner customer tag rides through the fabric encapsulated in this Native/PVID VLAN to the destination port (or ports) in this tunnel. The outer tag is then stripped off as the packet exits the I/O module, which re-exposes the original customer-facing tag (or no tag) to the device that is attached to that egress port. A configuration sketch of this mode follows.
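The following ISCLI fragment sketches the tagpvid-ingress form of Easy Connect on an EN4093R or CN4093, matching the VLAN 4091 example above. The port ranges and LACP key are illustrative assumptions, and exact syntax can vary by Networking OS release:

    ! Sketch only: one Q-in-Q tunnel joining internal ports INTA1-INTA14 and
    ! aggregated uplinks EXT1-EXT2 on outer VLAN 4091 (example names/values)
    interface port EXT1,EXT2
            lacp mode active        ! match the aggregation on the upstream switch
            lacp key 1000
            exit
    interface port INTA1-INTA14,EXT1,EXT2
            pvid 4091               ! common outer tag / Native VLAN
            tagpvid-ingress         ! wrap/unwrap the outer tag at the port edge
            exit

With this in place, host and upstream VLANs (tagged or untagged) pass through unchanged, and no per-VLAN configuration is needed on the I/O module; as noted above, spanning tree is typically also disabled in this mode.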

Figure 1-3 Packet flow with Easy Connect

In all modes of Easy Connect, local switching that is based on the destination MAC address is still used.

Consider the following points about which form of Easy Connect mode makes the most sense for a specific situation:

- For users that require virtualized NICs, are already using vNIC Virtual Fabric mode, and are more comfortable staying with it, vNIC Virtual Fabric Dedicated Uplink Mode might be the best solution for Easy Connect functionality.
- For users that require virtualized NICs and have no particular opinion on which mode of virtualized NIC they prefer, UFP tunnel mode is the best choice for Easy Connect mode because the UFP feature is the future direction of virtualized NICs in the Flex System I/O module solutions.
- For users who are planning to use the vLAG feature, the UFP tunnel mode or tagpvid-ingress mode forms of Easy Connect are necessary (vNIC Virtual Fabric mode and SPAR Easy Connect modes do not work with the vLAG feature).
- For users that do not need vLAG or virtual NIC functionality, SPAR is a simple and clean solution to implement as an Easy Connect solution.

1.3.5 Introduction to the Failover feature

Failover, which is sometimes referred to as Layer 2 Failover or Trunk Failover, is not a virtualization feature in its own right, but it can play an important role when NICs on a server make use of teaming/bonding (forms of NIC virtualization in the OS). Failover is important in an embedded environment, such as in a Flex System chassis.

When NICs are teamed or bonded in an operating system, the OS must know when a NIC cannot reach the upstream network so it can decide whether to use that NIC in the team. Most commonly, this decision is a simple link up or link down check in the server: if the link is reporting up, use the NIC; if the link is reporting down, do not use the NIC.

In an embedded environment, this behavior can be a problem if the uplinks out of the embedded I/O module go down but the internal link to the server is still up. In that case, the server still reports the NIC link as up, even though there is no path to the upstream network. The server keeps sending traffic out a NIC that has no path out of the embedded I/O module, which disrupts server communications.

The Failover feature can be implemented in these environments. When the set of uplinks that the Failover feature is tracking goes down, configurable internal ports are also taken down, which alerts the embedded server to a path fault in this direction. The server can then use the team or bond to select a different NIC and maintain network connectivity. An example of how Failover can protect Compute Nodes in a PureFlex chassis when there is an uplink fault out of one of the I/O modules is shown in Figure 1-4.

Figure 1-4 Example of Failover in action (an uplink failure from the first I/O module toward ToR switch 1 causes the internal port to NIC 1 to be taken down, so the Compute Node fails over to NIC 2 through the second I/O module)

Without Failover or some other form of remote link failure detection, embedded servers can be exposed to loss of connectivity if the uplink path on one of the embedded I/O modules fails. Designs that use vLAG or some sort of cross-chassis aggregation (such as stacking) are not exposed to this issue (and therefore do not need the Failover feature) because they have a different way of coping with uplinks out of an I/O module going down. For example, with vLAG, packets that must get upstream can cross the vLAG ISL and use the other I/O module's uplinks to reach the upstream network. A configuration sketch of the Failover feature follows.
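The following ISCLI fragment sketches a Failover trigger that ties internal server-facing ports to the health of an LACP uplink aggregation. The trigger number, admin key, and port range are illustrative assumptions, and exact syntax can vary by Networking OS release:

    ! Sketch only: if the uplink aggregation (LACP key 1000) loses all links,
    ! take down internal ports INTA1-INTA14 so server teaming can fail over
    ! (example names/values)
    failover trigger 1 mmon monitor admin-key 1000
    failover trigger 1 mmon control member INTA1-INTA14
    failover trigger 1 enable
    failover enable

When the monitored uplinks recover, the controlled internal ports are brought back up and the server teams can return traffic to this path.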


More information

Dell PowerEdge Blades Outperform Cisco UCS in East-West Network Performance

Dell PowerEdge Blades Outperform Cisco UCS in East-West Network Performance Dell PowerEdge Blades Outperform Cisco UCS in East-West Network Performance This white paper compares the performance of blade-to-blade network traffic between two enterprise blade solutions: the Dell

More information

Using Virtual Switches in PowerVM to Drive Maximum Value of 10 Gb Ethernet

Using Virtual Switches in PowerVM to Drive Maximum Value of 10 Gb Ethernet Using Virtual Switches in PowerVM to Drive Maximum Value of 10 Gb Ethernet by Glenn E. Miller Certified IT Specialist Power Systems, AIX and PowerHA IBM Corporation and Kris Speetjens IT Architect Nobius

More information

Emulex 8Gb Fibre Channel Expansion Card (CIOv) for IBM BladeCenter IBM BladeCenter at-a-glance guide

Emulex 8Gb Fibre Channel Expansion Card (CIOv) for IBM BladeCenter IBM BladeCenter at-a-glance guide Emulex 8Gb Fibre Channel Expansion Card (CIOv) for IBM BladeCenter IBM BladeCenter at-a-glance guide The Emulex 8Gb Fibre Channel Expansion Card (CIOv) for IBM BladeCenter enables high-performance connection

More information

Link Aggregation Interoperability of the Dell PowerConnect 5316M with Cisco IOS or Cisco CatOS based Switches. By Bruce Holmes

Link Aggregation Interoperability of the Dell PowerConnect 5316M with Cisco IOS or Cisco CatOS based Switches. By Bruce Holmes Link Aggregation Interoperability of the Dell PowerConnect 5316M with Cisco IOS or Cisco CatOS based Switches By Bruce Holmes August 2005 Contents Introduction...3 Link Aggregation with Gigabit Ethernet

More information

QLogic 4Gb Fibre Channel Expansion Card (CIOv) for IBM BladeCenter IBM BladeCenter at-a-glance guide

QLogic 4Gb Fibre Channel Expansion Card (CIOv) for IBM BladeCenter IBM BladeCenter at-a-glance guide QLogic 4Gb Fibre Channel Expansion Card (CIOv) for IBM BladeCenter IBM BladeCenter at-a-glance guide The QLogic 4Gb Fibre Channel Expansion Card (CIOv) for BladeCenter enables you to quickly and simply

More information

hp ProLiant network adapter teaming

hp ProLiant network adapter teaming hp networking june 2003 hp ProLiant network adapter teaming technical white paper table of contents introduction 2 executive summary 2 overview of network addressing 2 layer 2 vs. layer 3 addressing 2

More information

Industry-based CLI (Cisco like) and Networking OS graphical user interface

Industry-based CLI (Cisco like) and Networking OS graphical user interface Specifications Traffic management/routing (Optimized for best performance) Security CLI Secure management Configuration tracking IPv6 management Quality of Service (QoS) (metering, remarking, DSCP/CoS)

More information

PassTest. Bessere Qualität, bessere Dienstleistungen!

PassTest. Bessere Qualität, bessere Dienstleistungen! PassTest Bessere Qualität, bessere Dienstleistungen! Q&A Exam : 642-999 Title : Implementing Cisco Data Center Unified Computing Version : Demo 1 / 5 1.When upgrading a standalone Cisco UCS C-Series server,

More information

Accelerating Network Virtualization Overlays with QLogic Intelligent Ethernet Adapters

Accelerating Network Virtualization Overlays with QLogic Intelligent Ethernet Adapters Enterprise Strategy Group Getting to the bigger truth. ESG Lab Review Accelerating Network Virtualization Overlays with QLogic Intelligent Ethernet Adapters Date: June 2016 Author: Jack Poller, Senior

More information

Expert Reference Series of White Papers. Planning for the Redeployment of Technical Personnel in the Modern Data Center

Expert Reference Series of White Papers. Planning for the Redeployment of Technical Personnel in the Modern Data Center Expert Reference Series of White Papers Planning for the Redeployment of Technical Personnel in the Modern Data Center info@globalknowledge.net www.globalknowledge.net Planning for the Redeployment of

More information

Emulex Networking and Converged Networking Adapters for ThinkServer Product Guide

Emulex Networking and Converged Networking Adapters for ThinkServer Product Guide Networking and Converged Networking Adapters for ThinkServer Product Guide The OCe14000 family of 10 Gb Ethernet Networking and Converged Networking Adapters for ThinkServer builds on the foundation of

More information

IBM Flex System FC5022 2-port 16Gb FC Adapter IBM Redbooks Product Guide

IBM Flex System FC5022 2-port 16Gb FC Adapter IBM Redbooks Product Guide IBM Flex System FC5022 2-port 16Gb FC Adapter IBM Redbooks Product Guide The network architecture on the IBM Flex System platform has been specifically designed to address network challenges, giving you

More information

Redpaper. IBM Flex System Networking in an Enterprise Data Center. Front cover. ibm.com/redbooks

Redpaper. IBM Flex System Networking in an Enterprise Data Center. Front cover. ibm.com/redbooks Front cover IBM Flex System Networking in an Enterprise Data Center Describes evolution of enterprise data center networking infrastructure Describes networking architecture and portfolio Provides in-depth

More information

ORACLE OPS CENTER: PROVISIONING AND PATCH AUTOMATION PACK

ORACLE OPS CENTER: PROVISIONING AND PATCH AUTOMATION PACK ORACLE OPS CENTER: PROVISIONING AND PATCH AUTOMATION PACK KEY FEATURES PROVISION FROM BARE- METAL TO PRODUCTION QUICKLY AND EFFICIENTLY Controlled discovery with active control of your hardware Automatically

More information

MLAG on Linux - Lessons Learned. Scott Emery, Wilson Kok Cumulus Networks Inc.

MLAG on Linux - Lessons Learned. Scott Emery, Wilson Kok Cumulus Networks Inc. MLAG on Linux - Lessons Learned Scott Emery, Wilson Kok Cumulus Networks Inc. Agenda MLAG introduction and use cases Lessons learned MLAG control plane model MLAG data plane Linux kernel requirements Other

More information

VXLAN Performance Evaluation on VMware vsphere 5.1

VXLAN Performance Evaluation on VMware vsphere 5.1 VXLAN Performance Evaluation on VMware vsphere 5.1 Performance Study TECHNICAL WHITEPAPER Table of Contents Introduction... 3 VXLAN Performance Considerations... 3 Test Configuration... 4 Results... 5

More information

Top of Rack: An Analysis of a Cabling Architecture in the Data Center

Top of Rack: An Analysis of a Cabling Architecture in the Data Center SYSTIMAX Solutions Top of Rack: An Analysis of a Cabling Architecture in the Data Center White paper Matthew Baldassano, Data Center Business Unit CommScope, Inc, June 2010 www.commscope.com Contents I.

More information

Juniper Networks EX Series/ Cisco Catalyst Interoperability Test Results. May 1, 2009

Juniper Networks EX Series/ Cisco Catalyst Interoperability Test Results. May 1, 2009 Juniper Networks EX Series/ Cisco Catalyst Interoperability Test Results May 1, 2009 Executive Summary Juniper Networks commissioned Network Test to assess interoperability between its EX4200 and EX8208

More information

EVOLVING ENTERPRISE NETWORKS WITH SPB-M APPLICATION NOTE

EVOLVING ENTERPRISE NETWORKS WITH SPB-M APPLICATION NOTE EVOLVING ENTERPRISE NETWORKS WITH SPB-M APPLICATION NOTE EXECUTIVE SUMMARY Enterprise network managers are being forced to do more with less. Their networks are growing in size and complexity. They need

More information

COMPLEXITY AND COST COMPARISON: CISCO UCS VS. IBM FLEX SYSTEM (REVISED)

COMPLEXITY AND COST COMPARISON: CISCO UCS VS. IBM FLEX SYSTEM (REVISED) COMPLEXITY AND COST COMPARISON: CISCO UCS VS. IBM FLEX SYSTEM (REVISED) Not all IT architectures are created equal. Whether you are updating your existing infrastructure or building from the ground up,

More information

IP SAN Fundamentals: An Introduction to IP SANs and iscsi

IP SAN Fundamentals: An Introduction to IP SANs and iscsi IP SAN Fundamentals: An Introduction to IP SANs and iscsi Updated April 2007 Sun Microsystems, Inc. 2007 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, CA 95054 USA All rights reserved. This

More information

vsphere Networking ESXi 5.0 vcenter Server 5.0 EN-000599-01

vsphere Networking ESXi 5.0 vcenter Server 5.0 EN-000599-01 ESXi 5.0 vcenter Server 5.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions

More information

Juniper / Cisco Interoperability Tests. August 2014

Juniper / Cisco Interoperability Tests. August 2014 Juniper / Cisco Interoperability Tests August 2014 Executive Summary Juniper Networks commissioned Network Test to assess interoperability, with an emphasis on data center connectivity, between Juniper

More information

RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES

RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS Server virtualization offers tremendous benefits for enterprise IT organizations server

More information

Reference Architecture for Dell VIS Self-Service Creator and VMware vsphere 4

Reference Architecture for Dell VIS Self-Service Creator and VMware vsphere 4 Reference Architecture for Dell VIS Self-Service Creator and VMware vsphere 4 Solutions for Large Environments Virtualization Solutions Engineering Ryan Weldon and Tom Harrington THIS WHITE PAPER IS FOR

More information

HP Converged Infrastructure Solutions

HP Converged Infrastructure Solutions HP Converged Infrastructure Solutions HP Virtual Connect and HP StorageWorks Simple SAN Connection Manager Enterprise Software Solution brief Executive summary Whether it is with VMware vsphere, Microsoft

More information

BUILDING A NEXT-GENERATION DATA CENTER

BUILDING A NEXT-GENERATION DATA CENTER BUILDING A NEXT-GENERATION DATA CENTER Data center networking has changed significantly during the last few years with the introduction of 10 Gigabit Ethernet (10GE), unified fabrics, highspeed non-blocking

More information

IBM BladeCenter H with Cisco VFrame Software A Comparison with HP Virtual Connect

IBM BladeCenter H with Cisco VFrame Software A Comparison with HP Virtual Connect IBM BladeCenter H with Cisco VFrame Software A Comparison with HP Connect Executive Overview This white paper describes how Cisco VFrame Server Fabric ization Software works with IBM BladeCenter H to provide

More information

Advanced Network Services Teaming

Advanced Network Services Teaming Advanced Network Services Teaming Advanced Network Services (ANS) Teaming, a feature of the Advanced Network Services component, lets you take advantage of multiple adapters in a system by grouping them

More information

Cloud-ready network architecture

Cloud-ready network architecture IBM Systems and Technology Thought Leadership White Paper May 2011 Cloud-ready network architecture 2 Cloud-ready network architecture Contents 3 High bandwidth with low latency 4 Converged communications

More information

Implementing Cisco Data Center Unified Computing (DCUCI)

Implementing Cisco Data Center Unified Computing (DCUCI) Certification CCNP Data Center Implementing Cisco Data Center Unified Computing (DCUCI) 5 days Implementing Cisco Data Center Unified Computing (DCUCI) is designed to serve the needs of engineers who implement

More information

Adding Traffic Sources to a Monitoring Session, page 7 Activating a Traffic Monitoring Session, page 8 Deleting a Traffic Monitoring Session, page 9

Adding Traffic Sources to a Monitoring Session, page 7 Activating a Traffic Monitoring Session, page 8 Deleting a Traffic Monitoring Session, page 9 This chapter includes the following sections: Traffic Monitoring, page 1 Guidelines and Recommendations for Traffic Monitoring, page 2 Creating an Ethernet Traffic Monitoring Session, page 3 Setting the

More information

How To Set Up A Virtual Network On Vsphere 5.0.5.2 (Vsphere) On A 2Nd Generation Vmkernel (Vklan) On An Ipv5 Vklan (Vmklan)

How To Set Up A Virtual Network On Vsphere 5.0.5.2 (Vsphere) On A 2Nd Generation Vmkernel (Vklan) On An Ipv5 Vklan (Vmklan) Best Practices for Virtual Networking Karim Elatov Technical Support Engineer, GSS 2009 VMware Inc. All rights reserved Agenda Best Practices for Virtual Networking Virtual Network Overview vswitch Configurations

More information

Deploying 10 Gigabit Ethernet on VMware vsphere 4.0 with Cisco Nexus 1000V and VMware vnetwork Standard and Distributed Switches - Version 1.

Deploying 10 Gigabit Ethernet on VMware vsphere 4.0 with Cisco Nexus 1000V and VMware vnetwork Standard and Distributed Switches - Version 1. Deploying 10 Gigabit Ethernet on VMware vsphere 4.0 with Cisco Nexus 1000V and VMware vnetwork Standard and Distributed Switches - Version 1.0 Table of Contents Introduction...3 Design Goals...3 VMware

More information

Chapter 3. Enterprise Campus Network Design

Chapter 3. Enterprise Campus Network Design Chapter 3 Enterprise Campus Network Design 1 Overview The network foundation hosting these technologies for an emerging enterprise should be efficient, highly available, scalable, and manageable. This

More information

Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com

Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com W H I T E P A P E R O r a c l e V i r t u a l N e t w o r k i n g D e l i v e r i n g F a b r i c

More information

What is VLAN Routing?

What is VLAN Routing? Application Note #38 February 2004 What is VLAN Routing? This Application Notes relates to the following Dell product(s): 6024 and 6024F 33xx Abstract Virtual LANs (VLANs) offer a method of dividing one

More information

Windows Host Utilities 6.0.2 Installation and Setup Guide

Windows Host Utilities 6.0.2 Installation and Setup Guide Windows Host Utilities 6.0.2 Installation and Setup Guide NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S.A. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501 Support telephone: +1 (888) 463-8277

More information

Switch Web GUI Quick Configuration Guide for

Switch Web GUI Quick Configuration Guide for Switch Web GUI Quick Configuration Guide for SSE-G48-TG4 SSE-G24-TG4 SSE-X24S SSE-X24SR SSE-X3348S SSE-X3348SR SSE-X3348T SSE-X3348TR SBM-GEM-X2C SBM-GEM-X2C+ SBM-GEM-X3S+ SBM-XEM-X10SM Release: 1.0c 1

More information

Data Center Networking Designing Today s Data Center

Data Center Networking Designing Today s Data Center Data Center Networking Designing Today s Data Center There is nothing more important than our customers. Data Center Networking Designing Today s Data Center Executive Summary Demand for application availability

More information

Using MLAG in Dell Networks

Using MLAG in Dell Networks dd version Using MLAG in Dell Networks A deployment guide for Dell Networking switches (version ) Dell Engineering March 04 January 04 A Dell Deployment and Configuration Guide Revisions Date Description

More information

White Paper. Advanced Server Network Virtualization (NV) Acceleration for VXLAN

White Paper. Advanced Server Network Virtualization (NV) Acceleration for VXLAN White Paper Advanced Server Network Virtualization (NV) Acceleration for VXLAN August 2012 Overview In today's cloud-scale networks, multiple organizations share the same physical infrastructure. Utilizing

More information

Building Tomorrow s Data Center Network Today

Building Tomorrow s Data Center Network Today WHITE PAPER www.brocade.com IP Network Building Tomorrow s Data Center Network Today offers data center network solutions that provide open choice and high efficiency at a low total cost of ownership,

More information

SN0054584-00 A. Reference Guide Efficient Data Center Virtualization with QLogic 10GbE Solutions from HP

SN0054584-00 A. Reference Guide Efficient Data Center Virtualization with QLogic 10GbE Solutions from HP SN0054584-00 A Reference Guide Efficient Data Center Virtualization with QLogic 10GbE Solutions from HP Reference Guide Efficient Data Center Virtualization with QLogic 10GbE Solutions from HP Information

More information

Fibre Channel over Ethernet: Enabling Server I/O Consolidation

Fibre Channel over Ethernet: Enabling Server I/O Consolidation WHITE PAPER Fibre Channel over Ethernet: Enabling Server I/O Consolidation Brocade is delivering industry-leading oe solutions for the data center with CNAs, top-of-rack switches, and end-of-row oe blades

More information

Setup for Failover Clustering and Microsoft Cluster Service

Setup for Failover Clustering and Microsoft Cluster Service Setup for Failover Clustering and Microsoft Cluster Service Update 1 ESX 4.0 ESXi 4.0 vcenter Server 4.0 This document supports the version of each product listed and supports all subsequent versions until

More information

How To Configure Link Aggregation On Supermicro Switch 2 And 3 (Lan) On A Microsoft Vlan 2 And Vlan 3 (Vlan) (Lan 2) (Vlans) (Lummer) (Powerline) (

How To Configure Link Aggregation On Supermicro Switch 2 And 3 (Lan) On A Microsoft Vlan 2 And Vlan 3 (Vlan) (Lan 2) (Vlans) (Lummer) (Powerline) ( L2 / L3 Switches Link Aggregation Configuration Guide Revision 1.0 The information in this USER S MANUAL has been carefully reviewed and is believed to be accurate. The vendor assumes no responsibility

More information

Abstract. MEP; Reviewed: GAK 10/17/2005. Solution & Interoperability Test Lab Application Notes 2005 Avaya Inc. All Rights Reserved.

Abstract. MEP; Reviewed: GAK 10/17/2005. Solution & Interoperability Test Lab Application Notes 2005 Avaya Inc. All Rights Reserved. Configuring Single Instance Rapid Spanning Tree Protocol (RSTP) between an Avaya C360 Converged Switch and HP ProCurve Networking Switches to support Avaya IP Telephony Issue 1.0 Abstract These Application

More information

How to Configure Intel Ethernet Converged Network Adapter-Enabled Virtual Functions on VMware* ESXi* 5.1

How to Configure Intel Ethernet Converged Network Adapter-Enabled Virtual Functions on VMware* ESXi* 5.1 How to Configure Intel Ethernet Converged Network Adapter-Enabled Virtual Functions on VMware* ESXi* 5.1 Technical Brief v1.0 February 2013 Legal Lines and Disclaimers INFORMATION IN THIS DOCUMENT IS PROVIDED

More information

Chapter 4: Spanning Tree Design Guidelines for Cisco NX-OS Software and Virtual PortChannels

Chapter 4: Spanning Tree Design Guidelines for Cisco NX-OS Software and Virtual PortChannels Design Guide Chapter 4: Spanning Tree Design Guidelines for Cisco NX-OS Software and Virtual PortChannels 2012 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.

More information

UCS Network Utilization Monitoring: Configuration and Best Practice

UCS Network Utilization Monitoring: Configuration and Best Practice UCS Network Utilization Monitoring: Configuration and Best Practice Steve McQuerry Technical Marketing Engineer Unified Computing Systems Cisco Systems, Inc. Document Version 1.0 1 Copyright 2013 Cisco

More information

Cisco Nexus 5548UP. Switch Configuration Guide for Dell PS Series SANs. A Dell Deployment and Configuration Guide

Cisco Nexus 5548UP. Switch Configuration Guide for Dell PS Series SANs. A Dell Deployment and Configuration Guide Cisco Nexus 5548UP Switch Configuration Guide for Dell PS Series SANs Dell Storage Engineering October 2015 A Dell Deployment and Configuration Guide Revisions Date February 2013 October 2013 March 2014

More information

EMC Virtual Infrastructure for Microsoft SQL Server

EMC Virtual Infrastructure for Microsoft SQL Server Microsoft SQL Server Enabled by EMC Celerra and Microsoft Hyper-V Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the information in this publication is accurate

More information

How To Design A Data Centre

How To Design A Data Centre DATA CENTRE TECHNOLOGIES & SERVICES RE-Solution Data Ltd Reach Recruit Resolve Refine 170 Greenford Road Harrow Middlesex HA1 3QX T +44 (0) 8450 031323 EXECUTIVE SUMMARY The purpose of a data centre is

More information

Virtual Machine in Data Center Switches Huawei Virtual System

Virtual Machine in Data Center Switches Huawei Virtual System Virtual Machine in Data Center Switches Huawei Virtual System Contents 1 Introduction... 3 2 VS: From the Aspect of Virtualization Technology... 3 3 VS: From the Aspect of Market Driving... 4 4 VS: From

More information

Demartek June 2012. Broadcom FCoE/iSCSI and IP Networking Adapter Evaluation. Introduction. Evaluation Environment

Demartek June 2012. Broadcom FCoE/iSCSI and IP Networking Adapter Evaluation. Introduction. Evaluation Environment June 212 FCoE/iSCSI and IP Networking Adapter Evaluation Evaluation report prepared under contract with Corporation Introduction Enterprises are moving towards 1 Gigabit networking infrastructures and

More information

What s New in VMware vsphere 5.5 Networking

What s New in VMware vsphere 5.5 Networking VMware vsphere 5.5 TECHNICAL MARKETING DOCUMENTATION Table of Contents Introduction.................................................................. 3 VMware vsphere Distributed Switch Enhancements..............................

More information

CCNA R&S: Introduction to Networks. Chapter 5: Ethernet

CCNA R&S: Introduction to Networks. Chapter 5: Ethernet CCNA R&S: Introduction to Networks Chapter 5: Ethernet 5.0.1.1 Introduction The OSI physical layer provides the means to transport the bits that make up a data link layer frame across the network media.

More information

Monitoring Traffic. Traffic Monitoring. This chapter includes the following sections:

Monitoring Traffic. Traffic Monitoring. This chapter includes the following sections: Monitoring Traffic This chapter includes the following sections: Traffic Monitoring, page 1 Guidelines and Recommendations for Traffic Monitoring, page 2 Creating an Ethernet Traffic Monitoring Session,

More information

Intel Ethernet Switch Converged Enhanced Ethernet (CEE) and Datacenter Bridging (DCB) Using Intel Ethernet Switch Family Switches

Intel Ethernet Switch Converged Enhanced Ethernet (CEE) and Datacenter Bridging (DCB) Using Intel Ethernet Switch Family Switches Intel Ethernet Switch Converged Enhanced Ethernet (CEE) and Datacenter Bridging (DCB) Using Intel Ethernet Switch Family Switches February, 2009 Legal INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION

More information

Microsoft Hyper-V Cloud Fast Track with Lenovo Flex System and NetApp FAS8040

Microsoft Hyper-V Cloud Fast Track with Lenovo Flex System and NetApp FAS8040 Front cover Microsoft Hyper-V Cloud Fast Track with Lenovo Flex System and NetApp FAS8040 Last Update: 23 September 2015 Based on Flex System x240 M5 Compute Nodes and NetApp FAS8040 Storage Uses Windows

More information

Lenovo Partner Pack for System Center Operations Manager

Lenovo Partner Pack for System Center Operations Manager Lenovo Partner Pack for System Center Operations Manager Lenovo Enterprise Product Group Version 1.0 December 2013 2013 Lenovo. All rights reserved. Legal Disclaimers: First paragraph is required. Trademark

More information

Data Center Convergence. Ahmad Zamer, Brocade

Data Center Convergence. Ahmad Zamer, Brocade Ahmad Zamer, Brocade SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies and individual members may use this material in presentations

More information

Open Cloud Networking: Unlocking the Full Potential of Cloud Computing. A Dell Technical White Paper

Open Cloud Networking: Unlocking the Full Potential of Cloud Computing. A Dell Technical White Paper Open Cloud Networking: Unlocking the Full Potential of Cloud Computing A Dell Technical White Paper THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL

More information