Converged networks with Fibre Channel over Ethernet and Data Center Bridging

Technology brief, 2nd edition

Contents

Introduction
Traditional data center topology
Early attempts at converged networks
Network convergence with FCoE
10 Gigabit Ethernet
HP Virtual Connect Flex-10
HP Virtual Connect FlexFabric with FCoE
Emerging standards for network convergence
FCoE standard
FCoE protocol encapsulation
Fibre Channel Forwarder
ENode
FCoE and Ethernet
DCB standards
Priority-based Flow Control
Enhanced Transmission Selection
Quantized Congestion Notification
Data Center Bridging Exchange
Migrating to converged fabrics
HP strategy
For more information
Call to action

Introduction

Using application-specific networks for data, management, and storage is complex and costly. Network convergence is a more economical solution that simplifies your data center management by partially or completely consolidating all block-based storage and Ethernet-based data communications networks onto a single fabric. Any network topology constructed with one or more switched network nodes is a fabric. Converged networks consolidate two or more network types onto a single fabric. The promise of network convergence is that it will reduce the cost of qualifying, buying, powering, cooling, provisioning, maintaining, and managing network-related equipment. The challenge is determining the best adoption strategy for your business.

This technology brief does the following for you:
- Defines converged networks
- Summarizes previous attempts to create them
- Explains Fibre Channel over Ethernet (FCoE) technology
- Describes how converged network topologies and converged network adapters (CNAs) work together to tie multiple networks into a single, converged infrastructure
- Introduces the networking standards required to support this new breed of converged networks
- Explains how the new standards will affect how you design and deploy your converged network infrastructure over the next several years

Traditional data center topology

Traditional data centers typically have underused capacity, inflexible single-purpose resources, and high management costs. Typical data center infrastructure designs include separate, heterogeneous network devices for different types of data. Each device adds to the complexity, cost, and management overhead. Many data centers support three or more types of networks that serve these purposes:
- Block storage data management
- Remote management
- Business-centric data communications

Multiple network types require unique switches, network adapters, network management systems, and technology to unify these networks.

Early attempts at converged networks

There have been many attempts to create converged networks over the past decade. Fibre Channel Protocol (FCP) is a lightweight mapping of SCSI to the Fibre Channel (FC) layers 1 and 2 transport protocol (Figure 1, left). FC carries not only FCP traffic but also IP traffic, to create a converged network. The cost of FC and the acceptance of Ethernet as the de facto standard for LAN communications prevented widespread FC use except for data center SANs in enterprise businesses.

InfiniBand (IB) technology provides a converged network capability by transporting inter-processor communication, LAN, and storage protocols. The two most common storage protocols for IB are SCSI Remote Direct Memory Access Protocol (SRP) and iSCSI Extensions for RDMA (iSER). These protocols use the RDMA capabilities of IB. SRP builds a direct SCSI-to-RDMA mapping layer and protocol, and iSER copies data directly to the SCSI I/O buffers without intermediate data copies (Figure 1, left of center). These protocols are lightweight but not as streamlined as FC.

Widespread deployment was impractical because of the perceived high cost of IB and the complex gateways and routers needed to translate from these IB-centric protocols and networks to the native FC storage devices in data centers. High Performance Computing (HPC) environments that have adopted IB as the standard transport network use the SRP and iSER protocols.

Figure 1. Comparison of multiple protocol stacks for converged networks (Fibre Channel, InfiniBand, iSCSI, FCIP/iFCP, and FCoE/DCB)

Internet SCSI (iSCSI) was an attempt to bring a direct SCSI-to-TCP/IP mapping layer and protocol to the mass Ethernet market, to drive costs lower, and to allow deploying SANs over existing Ethernet LAN infrastructure. iSCSI technology (Figure 1, center) was very appealing to the small and medium business market because of the low-cost software initiators and the ability to use any existing Ethernet LAN. However, iSCSI typically requires new iSCSI storage devices that lack the features of devices using FC interfaces. Also, iSCSI-to-FC gateways and routers are very complex and expensive. They do not scale cost effectively for the enterprise. Most enterprise businesses have avoided iSCSI or have used it for lower-tier storage applications or for departmental use.

FC over IP (FCIP) and Internet FC Protocol (iFCP) map FCP and FC characteristics to LANs, MANs, and WANs. Both of these protocols map FC framing on top of the TCP/IP protocol stack (Figure 1, right of center). FCIP is a SAN extension protocol used to bridge FC SANs across large geographical areas. It is not intended for host/server or target/storage attachment. The iFCP protocol allows Ethernet-based hosts to attach to FC SANs through iFCP-to-FC SAN gateways. These gateways and protocols were never widely adopted except for SAN extension because of their complexity, lack of scalability, and cost.

Network convergence with FCoE

FCoE is the next attempt to converge block storage protocols onto Ethernet. FCoE relies on an Ethernet infrastructure that uses a new set of Data Center Bridging (DCB) standards defined by the IEEE (Figure 1, right). Converged Enhanced Ethernet (CEE) is Ethernet infrastructure that implements DCB.

Although the DCB standards can apply to any IEEE 802 network, most use the term to refer to enhanced Ethernet, making DCB and CEE equivalent terms. We use the term DCB to refer to an Ethernet infrastructure that implements at least the minimum set of DCB standards needed to carry FCoE protocols.

A traffic class (TC) is a traffic management element. DCB enhances low-level Ethernet protocols to send different traffic classes to their appropriate destinations. It also supports lossless behavior for selected TCs, for example, those that carry block storage data. FCoE with DCB tries to mimic the lightweight nature of native FC protocols. It does not incorporate TCP or even IP protocols. This means that FCoE is a non-routable protocol meant for local deployment within a data center. The main advantage of FCoE is that switch vendors can easily implement the logic for converting FCoE/DCB to native FC in high-performance switch silicon. FCoE solutions should cost less as they become widely used.

10 Gigabit Ethernet

One obstacle to using Ethernet for converged networks has been its limited bandwidth. As 10 Gigabit Ethernet (10 GbE) technology becomes more widely used, 10 GbE network components will fulfill the combined data and storage communication needs of many applications. With 10 GbE, converged Ethernet switching fabrics handle multiple TCs for many data center applications. DCB-capable Ethernet gives you maximum flexibility in selecting network management tools. As Ethernet bandwidth increases, fewer physical links can carry more data (Figure 2).

Figure 2. Multiple traffic types sharing the same link

HP Virtual Connect Flex-10

Virtual Connect (VC) Flex-10 technology lets you partition the Ethernet bandwidth of each 10 Gb Ethernet port into up to four FlexNICs. The FlexNICs function and appear to the system as discrete physical NICs, each with its own PCI function and driver instance. The partitioning must be in increments of 100 Mb. While FlexNICs share the same physical port, traffic flow for each is isolated with its own MAC address and VLAN tags between the FlexNIC and the associated VC Flex-10 module. Using the VC Manager CLI or GUI, you can set and control the transmit bandwidth available to each FlexNIC according to server workload needs. With the VC Flex-10 modules now available, each dual-port Flex-10 enabled server adapter or mezzanine card supports up to eight FlexNICs, four on each physical port. Each VC Flex-10 module can support up to 64 FlexNICs.

Flex-10 adds LAN convergence to VC's virtual I/O technology. It aggregates up to four separate traffic streams into a single 10 Gb pipe connecting to VC modules. VC then routes the frames to the appropriate external networks. This lets you consolidate and better manage physical connections, optimize bandwidth, and reduce cost.
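To make the Flex-10 partitioning rules concrete, the following Python sketch validates a hypothetical per-port FlexNIC bandwidth plan against the constraints described above: up to four FlexNICs per 10 Gb port, allocations in 100 Mb increments, and a total that does not exceed the port. The function and the example values are illustrative only; they are not VC Manager syntax.

    # Illustrative check of a Flex-10 port partitioning plan (not VC Manager syntax).
    # Rules taken from the text above: up to four FlexNICs per 10 Gb port,
    # bandwidth assigned in 100 Mb increments, total not exceeding the 10 Gb link.

    PORT_BANDWIDTH_MB = 10_000   # 10 Gb port expressed in Mb
    INCREMENT_MB = 100           # partitioning granularity
    MAX_FLEXNICS_PER_PORT = 4

    def validate_flexnic_plan(allocations_mb):
        """Return True if the per-FlexNIC transmit bandwidth plan is valid."""
        if not 1 <= len(allocations_mb) <= MAX_FLEXNICS_PER_PORT:
            return False
        if any(a <= 0 or a % INCREMENT_MB != 0 for a in allocations_mb):
            return False
        return sum(allocations_mb) <= PORT_BANDWIDTH_MB

    # Example: production LAN, backup LAN, management, and iSCSI FlexNICs on one port.
    print(validate_flexnic_plan([4_000, 4_000, 500, 1_500]))  # True
    print(validate_flexnic_plan([6_000, 5_000]))              # False: exceeds 10 Gb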

HP Virtual Connect FlexFabric with FCoE

Now that we have achieved an acceptable level of LAN convergence with Flex-10 technology, the next logical step is to add LAN/SAN convergence technology. Virtual Connect FlexFabric broadens Virtual Connect Flex-10 technology to provide solutions for converging different network protocols. We plan to deliver the FlexFabric vision by converging technology, management tools, and partner product portfolios into a virtualized fabric for the data center.

Emerging standards for network convergence

Converged networks require new standards. The International Committee for Information Technology Standards (INCITS) T11 technical committee creates the standards that relate to storage and storage networking technologies. The IEEE 802.1 Work Group is responsible for developing two types of standards:
- Standards common to all IEEE 802 defined network types (for example, Ethernet and Token Ring)
- Standards necessary to support communication within and between these network types

FCoE standard

FCoE is an emerging technology under development by the INCITS T11 technical committee. INCITS/ANSI T11.3 FC-BB-5 is the official standard. It includes two protocol definitions: FCoE and the FCoE Initialization Protocol (FIP). The FCoE protocol defines the encapsulation of FC frames into Ethernet frames. FIP defines a fabric discovery protocol, creates an Ethernet version of the FC fabric login services, and defines the protocols for handling MAC address assignment and association with World Wide Names (WWNs). FCoE relies on the improved flow control, well-defined traffic shaping, and multiple TC support that the IEEE DCB standards provide.

FCoE protocol encapsulation

FCoE is different from previous attempts to move SCSI traffic over Ethernet. The FCoE protocol allows efficient, high-performance conversion between FCoE links and FC links in layer 2 switches. DCB enhancements offer lossless operation for selected TCs. This lets us place the FC protocol directly on top of layer 2 (link layer) Ethernet, so we don't have to rely on more complex transport protocols such as TCP to ensure lossless behavior. Implementing FCoE in this way lets us develop devices such as adapters and switches that reuse most of the existing FC logic on top of the new DCB/Ethernet physical interfaces.

The FCoE protocol encapsulation standard requires IEEE 802.1Q tags. Each FCoE frame contains explicit TC/priority tags for efficient processing in layer 2, DCB-capable Ethernet switches. Data centers deploy FCoE for intra-data center use with a span similar to a switched LAN subnet or SAN fabric because FCoE is a layer 2 protocol and does not use the layer 3 IP protocol. FCoE encapsulates FC frames, including the FC frame delimiters, headers, payload, and frame check sequence, within Ethernet frames using the format illustrated in Figure 3.
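As a rough illustration of the encapsulation overhead, the following sketch totals the frame fields for a full-sized FC payload. The field sizes are assumptions drawn from the FC-BB-5 encapsulation layout rather than from this brief; they show why FCoE links are typically configured for mini-jumbo Ethernet frames rather than the standard 1500-byte payload.

    # Rough FCoE frame-size estimate (field sizes assumed from FC-BB-5, not from this brief).
    ETH_HEADER = 14        # destination MAC, source MAC, Ethertype (0x8906 for FCoE)
    VLAN_TAG = 4           # 802.1Q tag carrying the priority/TC bits (required for FCoE)
    FCOE_HEADER = 14       # version + reserved bits + Start-of-Frame
    FC_HEADER = 24         # encapsulated Fibre Channel frame header
    FC_MAX_PAYLOAD = 2112  # largest FC data field
    FC_CRC = 4             # Fibre Channel CRC, carried unchanged
    FCOE_TRAILER = 4       # End-of-Frame + reserved
    ETH_FCS = 4            # Ethernet frame check sequence

    max_fcoe_frame = (ETH_HEADER + VLAN_TAG + FCOE_HEADER + FC_HEADER +
                      FC_MAX_PAYLOAD + FC_CRC + FCOE_TRAILER + ETH_FCS)
    print(max_fcoe_frame)  # 2180 bytes, larger than a standard 1518-byte Ethernet frame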

Figure 3. Illustration of an FCoE frame (Ethernet header, Start of Frame, FC header, FC payload, frame check sequence, End of Frame)

Layer 2 encapsulation provides several advantages to FCoE over previous converged network implementations:
- Because devices reuse existing FC logic, FCoE devices use existing FC driver models for the new converged network adapters.
- We can easily implement FCoE in switches because the logic necessary to convert between FCoE and FC is simple.
- Existing FC security and management operations, procedures, and applications do not change when an FCoE/DCB infrastructure is used for a partially or completely converged network.
- FCoE takes advantage of a lossless 10 GbE fabric with significantly higher bandwidth than 8 Gb FC fabrics (actually 6.4 Gb plus encoding overhead in the FC protocol).
- Future protocols can use the enhanced DCB Ethernet features that support FCoE.

Fibre Channel Forwarder

A Fibre Channel Forwarder (FCF) is a function within a switch that acts as a translation point, converting FCoE traffic between DCB-enabled Ethernet ports and native FC ports. There is one FCF function in a switch for each upstream FC fabric connected to the FC ports of that switch. In other words, there can be more than one FCF function in a switch. An FCF also provides the portal through which converged network adapters access the traditional SAN fabric services, for example fabric login, name services, and zoning services. When first initialized, converged network adapters discover the available FCFs in a DCB network. Through management direction, they attach themselves to at least one FCF to begin communication with a SAN fabric. During fabric login, FCFs provide the mechanism that negotiates MAC address provisioning for the FCoE portion of a converged network adapter. The most commonly used mechanism is Fabric Provisioned MAC Addresses (FPMA). It operates like FC addressing in an FC network, where the address used in the frames is allocated at fabric login time. This is different from normal Ethernet NIC behavior, where a static address is typically burned in at the factory.

ENode

An ENode is a device that takes the place of the traditional LAN NIC and the FC HBA in a host or server. It is commonly called a converged network adapter (CNA). It provides both data communications and block storage communications through a converged network implemented with DCB-capable Ethernet. An ENode merges the traffic from the NIC and from the SCSI/FC functions into a stream of Ethernet frames sent to the DCB-enabled Ethernet network. Within the DCB network, a DCB/FCoE/FC switch disaggregates the converged traffic streams and sends the different TCs to their appropriate destinations: legacy LANs, legacy FC nodes, or DCB network nodes.
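The FPMA mechanism described for the FCF can be sketched as the concatenation of two 24-bit values: the fabric's FC-MAP prefix and the FC_ID assigned to the VN_Port at fabric login. The default FC-MAP value and the example FC_ID below are assumptions taken from FC-BB-5 conventions, not from this brief, and are shown only for illustration.

    # Sketch of Fabric Provisioned MAC Address (FPMA) construction.
    # Assumption (from FC-BB-5, not this brief): FPMA = 24-bit FC-MAP || 24-bit FC_ID.

    def fpma(fc_map: int, fc_id: int) -> str:
        """Build the 48-bit FPMA from the fabric's FC-MAP prefix and the FC_ID
        that the FCF assigns to the VN_Port at fabric login."""
        if not (0 <= fc_map < 2**24 and 0 <= fc_id < 2**24):
            raise ValueError("FC-MAP and FC_ID are both 24-bit values")
        mac = (fc_map << 24) | fc_id
        return ":".join(f"{(mac >> shift) & 0xFF:02x}" for shift in range(40, -1, -8))

    # Default FC-MAP 0E-FC-00 and a hypothetical FC_ID of 01.0A.00:
    print(fpma(0x0EFC00, 0x010A00))   # 0e:fc:00:01:0a:00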

The ENode (Figure 4) consists of these components:
- FCoE Controller: uses the FCoE Initialization Protocol (FIP) to discover the SAN fabrics through the FCFs and provisions the virtual N_Ports (VN_Ports) and FCoE Link End Points (LEPs).
- FCoE LEPs: convert FC frames to FCoE frames on the transmit side and FCoE frames to FC frames on the receive side. There is one LEP for each VN_Port established in the ENode.
- VN_Ports: instantiate virtual N_Ports with N_Port ID Virtualization (NPIV) capability similar to a traditional FC HBA. The VN_Ports in an ENode include the MAC address to WWN translations required for proper communication with FCFs in a converged network.
- FC Function: the traditional logic implemented in an FC HBA. It handles storage discovery, storage connection management, error recovery, and host bus (PCIe) interface interoperation with the upper-layer SCSI drivers. This function behaves so much like an FC HBA function that CNAs and HBAs from the same vendor typically use the same storage drivers in the host operating systems to control them. This makes deploying both converged and non-converged systems in a data center very easy during the transition to a converged infrastructure.

Figure 4. FCoE architecture components

FCoE and Ethernet

FCoE requires DCB-enabled Ethernet. The IEEE is working to enhance the IEEE 802 network standards to allow FC, or any TC requiring lossless behavior, to run efficiently over many types of IEEE 802 compliant, MAC layer protocols, including Ethernet. We expect the FCoE standard ratification in late 2009. It is important to understand that FCoE will not work on legacy Ethernet networks because it requires a lossless form of Ethernet. FC cannot handle dropped frames the way Ethernet allows today. It is possible to create a lossless Ethernet network using the existing IEEE 802.3x flow control mechanisms.

If the network carries multiple TCs, however, the existing mechanisms can cause Quality of Service (QoS) issues, limit the ability to scale a network, and affect performance.

DCB standards

DCB is not just the name for a set of new standards the IEEE is developing. It is also a term often used for Ethernet designed to carry multiple TCs, some with lossless behavior. You can think of DCB-enabled Ethernet as applying the DCB standards to the IEEE Ethernet standards to create a new set of products that implement this improved version of Ethernet. The change from legacy Ethernet to DCB-enabled Ethernet requires hardware and software changes, so you can't upgrade legacy Ethernet NICs and switches with DCB support to carry FCoE traffic. Fortunately, you only have to update the data paths in a data center that carry FCoE with DCB-enabled Ethernet devices. For full end-to-end data center use, all equipment manufacturers must agree to adopt four new IEEE protocols. The proposed standards are still under development, and full ratification of the complete set may take until late 2010 or 2011. One result of these ongoing standardization efforts is that DCB/FCoE products offered on the market today will likely need frequent software upgrades or even new hardware by the time DCB/FCoE technology is fully mature.

The DCB Task Group within the IEEE 802.1 Higher Layer LAN Protocols Work Group is defining the DCB protocols and technologies that apply to data center-oriented LAN communications. The standards they develop apply to all IEEE 802 network types, but they implicitly target Ethernet for primary implementation. Table 1 lists the four new technologies defined in three DCB draft standards.

Table 1. DCB draft standards for IEEE 802 networks

    Draft standard     New technology
    IEEE 802.1Qbb      Priority-based Flow Control (PFC)
    IEEE 802.1Qaz      Enhanced Transmission Selection (ETS)
    IEEE 802.1Qaz      DCB Capability Exchange Protocol (DCBX)
    IEEE 802.1Qau      Quantized Congestion Notification (QCN)

These standards serve three general purposes:
- Allow IEEE 802 LANs to carry multiple traffic classes
- Support lossless behavior on a subset of these traffic classes
- Formally define standard frame transmission scheduling mechanisms to support multiple traffic classes

You don't have to use all four of these protocols to implement a DCB network, and you don't need to use all the options available in each protocol. However, if vendors do not implement the entire set, products may limit the possible scale or features. Because the standards are evolving, current DCB/FCoE products do not implement all of these protocols or all their supported options. Therefore, we must discuss their deployment limitations within a data center.

Priority-based Flow Control

Legacy FC networks support a link-level flow control mechanism known as buffer-to-buffer or credit-based flow control. This lightweight, high-performance mechanism lets FC operate in a lossless manner. Credit-based flow control provides the reliable layer 2 network required for block storage traffic, for example SCSI. To transport FC and SCSI protocols over Ethernet and maintain a lightweight implementation, a similar mechanism must be provided for Ethernet networks.

Legacy Ethernet uses a simple flow control mechanism. It uses pause frames to let a congested port on an Ethernet NIC or switch tell its link partner to pause all traffic for a specified time.

This approach can limit performance when a network device port has multiple queues for receiving incoming frames of varying priorities or TCs: if one queue becomes full, the device must send a pause frame to the other side of the link. This pauses all traffic, regardless of TC/priority.

Supporting lossless behavior for block storage protocols on legacy Ethernet networks requires using legacy pause frames. However, this forces all traffic to be lossless on that link. The most bursty or bandwidth-driven TCs dictate the behavior of all TCs. Many types of traffic flows, for example real-time audio/video data streams, don't require lossless transmission and don't perform well on a lossless link. Even traditional TCP-based traffic flows optimized for lossy communications environments often don't perform well in lossless environments that simultaneously transport different classes of traffic with vastly different characteristics.

In Figure 5, low-bandwidth, latency-sensitive traffic for voice/video/financial transactions (green) and higher-bandwidth bulk traffic for storage (red) are sent on a link. The receiving device has two sets of queues for receiving and storing data, one for green traffic and the other for red traffic. In this example, the high-bandwidth bulk traffic fills the red receive queue. Although the green traffic has plenty of queue space available, the receiving device sends a pause frame because the red queue is full. The transmitting device receives this pause frame and stops all traffic on the link. Long delays interrupt the low-latency traffic.

Figure 5. Legacy pause-based flow control

Priority-based Flow Control (PFC) uses a modified version of the pause frame called a Per Priority Pause (PPP) frame. PPP allows the pause frame to specify which priorities, and thus which TCs, to pause. PFC uses the priority levels in the class of service fields of the 802.1Qbb PPP frame header. When a network device has one or more receive queues that are nearly full, it constructs a PPP frame to send to the remote link partner. The remote device examines the class of service fields to determine which priorities/TCs to pause. The port's transmit function then stops sending the priorities/TCs going to the full ingress queues on the congested device without affecting the priorities/TCs going to unfilled queues on the congested device.
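A sketch of how a Per Priority Pause frame might be built follows. The layout used here (the reserved MAC Control destination address, Ethertype 0x8808, a PFC opcode, an eight-bit priority-enable vector, and one 16-bit pause timer per priority measured in 512-bit-time quanta) is assumed from the 802.1Qbb draft rather than taken from this brief; the helper is illustrative only.

    import struct

    # Sketch of a Per Priority Pause (PPP/PFC) frame, assuming the 802.1Qbb layout:
    # MAC Control Ethertype 0x8808, opcode 0x0101, a priority-enable vector, and
    # one 16-bit timer per priority (in units of 512 bit times).

    PFC_MCAST_DA = bytes.fromhex("0180C2000001")   # reserved MAC Control address
    MAC_CONTROL_ETHERTYPE = 0x8808
    PFC_OPCODE = 0x0101

    def build_ppp_frame(src_mac: bytes, pause_quanta: dict) -> bytes:
        """pause_quanta maps priority (0-7) to a pause time; a value of 0 acts as XON."""
        enable_vector = 0
        timers = [0] * 8
        for prio, quanta in pause_quanta.items():
            enable_vector |= 1 << prio
            timers[prio] = quanta
        payload = struct.pack("!HH8H", PFC_OPCODE, enable_vector, *timers)
        return PFC_MCAST_DA + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload

    # XOFF priority 3 (for example, the FCoE TC) for the maximum time; other
    # priorities keep flowing, unlike a legacy pause frame.
    frame = build_ppp_frame(bytes(6), {3: 0xFFFF})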

Figure 6 illustrates the same scenario up to the point where the receiving node needs to send a pause frame. A PPP frame dictates pausing only the red TC. The pause takes advantage of the class of service fields to restrict the pause to only the classes of traffic that have nearly full queues. The transmitting station stops sending red traffic; the latency-sensitive green traffic continues to flow properly.

Figure 6. PFC-based flow control

Receive queues in a DCB Ethernet device have high and low watermarks. If a queue fills up to the high watermark, the device generates a PPP frame. If the level of the queue drops below the low watermark, the device sends a PPP frame specifying a zero time to indicate that the link partner may resume sending traffic for the affected TCs immediately. This allows an XON/XOFF-type operation on a per-priority/TC basis. PPP frames allow a single frame to specify XON/XOFF behavior independently for any of up to eight priorities/TCs. This reduces the control frame overhead when devices support PFC on multiple TCs.

The FCoE protocol requires DCB-enabled Ethernet devices to support only one PFC-enabled priority/TC; not all eight priorities/TCs have to support PFC. Many devices on the market today support only one PFC-enabled priority/TC. In the future, devices should support a greater number of PFC-enabled priorities/TCs, but that is not required for basic FCoE transport over DCB-enabled Ethernet links.

Enhanced Transmission Selection

Legacy Ethernet supports multiple traffic management elements called traffic classes (TCs). IEEE 802.1Q (VLAN) tags with a class of service (CoS) field assign a transmission priority to each TC. You can implement up to eight TCs (TC0 through TC7) in an Ethernet device. Current standards and product implementations focus on transmitting the traffic classes in strict priority order.

For applications operating completely at layer 2 (the MAC layer), strict priority does not allow the fair, deterministic bandwidth control typically preferred for all but the very highest priority traffic classes. This includes converged networks that handle block storage traffic using a layer 2 encapsulation protocol like FCoE.

One common misunderstanding about many modern Ethernet devices, particularly Ethernet switches, is that they already have bandwidth control and traffic shaping capabilities that support layer 2 protocols like FCoE. But these devices typically define traffic classes based on layer 3 (IP) or layer 4 information in frames, not on the priority field of the IEEE 802.1Q tag or the Ethertype (protocol) field in the Ethernet frame header.

The Enhanced Transmission Selection (ETS) standard formally defines how the port transmit logic of an Ethernet device selects the next frame to send from one or more priority/traffic class queues for layer 2 (MAC-based) protocols. This lets the device allocate bandwidth between layer 2 defined traffic classes and still support strict priority scheduling for the traffic classes that require it. ETS refines the existing TCs. ETS adds a bandwidth-sharing algorithm that you can assign to each of the supported TCs. When you configure a TC to use the ETS bandwidth-sharing algorithm, you must provide a bandwidth percentage.

Traffic class queues that are part of TCs assigned a strict priority scheduling algorithm (typically the default algorithm) are processed in strict priority order. They have three typical uses:
- Extremely high priority network control or management traffic
- Low-bandwidth/low-latency traffic
- Jitter (variable latency) sensitive or intolerant traffic

The ETS standard specifies that once all the strict priority TC queues are empty, the device sends frames from the TCs assigned an ETS scheduling algorithm. A single ETS TC can have more than one priority queue.

There is a common misconception about the ETS bandwidth-sharing algorithm. Some people think that the bandwidth percentage assigned to an ETS traffic class is a percentage of the link bandwidth of the port. That is not true. ETS bandwidth percentages represent the percentage of bandwidth available after satisfying all of the strict priority TCs. That is, if the strict priority TCs take up 4 Gb/s of the link bandwidth of a 10 Gb/s link, an ETS queue assigned 50 percent bandwidth is asking for 50 percent of the remaining 6 Gb/s of link bandwidth, or 3 Gb/s.

The ETS standard does not specify the bandwidth allocation algorithm that DCB-enabled Ethernet devices must use to select frames from the TCs. Device vendors get to decide the best algorithms for their products. The standard does suggest that deficit weighted round robin (DWRR) and a handful of other algorithms would suffice. The ETS standard also does not specify the algorithm for selecting frames for transmission from multiple priority queues assigned to the same TC. The standard suggests that using a strict priority algorithm between these queues is one possibility.

As Ethernet frames of varying priority queue up for transmission on a port, the device maps them into priorities and traffic classes and places the frames into independent priority or traffic class queues. Network administrators responsible for managing the port on the network device are responsible for configuring these assignments. The ETS standard specifies that these administrators are also responsible for assigning the scheduling algorithm for each traffic class.

In Figure 7, priority 5 frames are in TC4, and priority 1 frames are in TC1. Strict priority is the scheduling algorithm for both TCs, so the device sends their frames before any frames of the TCs assigned the ETS scheduling algorithm. In this case, the device sends frames for TC4 before any frames from TC1. If there are no frames in the queue for TC4, then the device sends frames in TC1 before any frames in any of the other TCs. The TCs assigned ETS scheduling (TC0, TC2, and TC3) have been allocated 50, 40, and 10 percent of the available bandwidth, respectively. These allocations are percentages of the bandwidth available after the transmit requirements of TC4 and TC1 are satisfied.
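Because the ETS percentages are so often misread as shares of the raw link rate, the following small calculation repeats the arithmetic from the example above; the helper function is illustrative only.

    # Worked example of ETS bandwidth shares: percentages apply to the bandwidth
    # left over after the strict priority traffic classes are served.

    def ets_shares(link_gbps: float, strict_priority_gbps: float,
                   ets_percentages: dict) -> dict:
        remaining = link_gbps - strict_priority_gbps
        return {tc: remaining * pct / 100 for tc, pct in ets_percentages.items()}

    # Numbers from the Figure 7 discussion: the strict priority classes (TC4, TC1)
    # consume 4 Gb/s of a 10 Gb/s link; TC0, TC2, and TC3 get 50, 40, and 10 percent.
    print(ets_shares(10, 4, {"TC0": 50, "TC2": 40, "TC3": 10}))
    # {'TC0': 3.0, 'TC2': 2.4, 'TC3': 0.6}  # shares of the remaining 6 Gb/s, not of 10 Gb/s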

Figure 7. Example of an Enhanced Transmission Selection (ETS) configuration

Also in Figure 7, note that TC2 has priority queues 2 and 3. The ETS standard suggests that frames transmit from the TC2 queues in strict priority order. In this example, the device sends any frames in the queue for priority 3 before any frames in the queue for priority 2. Again, the standard leaves the implementation of scheduling for these intra-TC queues to device vendors. Vendors might schedule the two priority queues in strict priority order or in round robin. Some implementations may be configurable to allow either mode.

The FCoE protocol requires DCB-enabled Ethernet devices to support at least two TCs with ETS scheduling algorithms: one for traditional data communication traffic and one for FCoE traffic. Many devices on the market support only two TCs with ETS capability. In future generations of hardware, devices should support more TCs capable of ETS bandwidth scheduling, but this is not required for basic FCoE transport over DCB-enabled Ethernet links.

Those who adopt this technology must clearly understand another important aspect of ETS performance. ETS bandwidth allocation is merely a best-effort specification of a minimum bandwidth guarantee. Many factors can limit a device's ability to meet these bandwidth requirements consistently. The bandwidth consumed by the strict priority queues can directly affect the amount of bandwidth available to the ETS traffic classes. When a port receives a Per Priority Pause (PPP) frame from its link partner, all transmission from that traffic class, or from the paused priority queue within the traffic class, stops for the duration of the pause. This can dramatically reduce the effective throughput of that traffic class. Finally, implementing congestion notification can also affect the amount of data transmitted from a traffic class, but not as severely as PFC affects ETS.

Quantized Congestion Notification

The IEEE 802.1Qau standard specifies a protocol called Quantized Congestion Notification (QCN). The QCN protocol supports end-to-end flow control in large, multi-hop, DCB-enabled, switched Ethernet infrastructures. It is one of the most significant standards for enabling converged network deployments in moderate to large data centers. PFC protects against occasional bursty congestion on a single link between DCB-enabled devices. QCN protects larger multi-hop or end-to-end converged networks from persistent or chronic congestion. These multi-hop networks are susceptible to congestion because typical tree-like network architectures tend to have choke points where multiple sources of data compete for network resources and bandwidth to reach a smaller number of destinations. Typical shared storage traffic patterns especially compound this issue. QCN does not guarantee a lossless environment in the DCB-enabled LAN.

You must use QCN in conjunction with PFC to provide lossless operation with smooth congestion management across large DCB-enabled networks.

QCN uses a special new tag that allows sources of traffic, for example CNAs, to identify traffic flows to all interconnect devices in a QCN-enabled DCB network. QCN defines two specific kinds of points in a network that implement the QCN protocol: congestion points and reaction points. The QCN protocol has these basic procedural elements:
- Reaction points initiate traffic into the network. They can include CNAs, target nodes, or DCB-enabled switches that bridge between native FC networks and the DCB-enabled Ethernet network. Reaction points tag their frames with traffic flow information identifying the source and destination of the traffic flow.
- When transmit queues fill up due to congestion from oversubscription, congestion points (typically switches) statistically sample the frames in the congested transmit queues to identify the traffic flows contributing most to the congestion. The congestion point device calculates congestion feedback quanta for each traffic source sampled. The device uses information from the sampled traffic flow tags to send congestion notifications back to the traffic sources.
- Upon receiving a congestion notification, a reaction point uses the feedback quanta to reduce the transmission rate for that traffic flow to that specific destination. QCN does not affect traffic sent on unrelated flows to unrelated destinations. If a reaction point receives no further congestion notification messages, it slowly increases its transmit rates until they reach normal levels.
- Most DCB-enabled Ethernet switches will implement congestion points.

We can roughly equate QCN operation to the TCP window algorithms that restrict traffic flow when a device detects lost frames. In the case of QCN, however, the protocol operates at layer 2 in the network. It uses high-performance, low-level hardware to improve the network's ability to react to congestion. Figure 8 illustrates a multi-hop network that implements QCN.

Figure 8. QCN congestion notification (congestion points, reaction points, storage data flow, and congestion notification messages)

In this example, multiple CNAs in servers are sending write data to a common storage device through a multi-hop network.

As a switch queue fills and surpasses a high watermark, the switch sends congestion notification messages to the server CNAs. The switch selects the server CNAs by statistically sampling the congested queue. The congestion notification occurs dynamically: the switch sends higher feedback quanta to CNAs producing the most traffic and lower feedback quanta to sources producing less traffic. As a result, the CNAs throttle down their transmit rates on the congested traffic flows. The decrease in traffic flow rates reduces the number of frames in the congested queue in the switch, achieving a more sustainable, balanced level of performance. As the congestion eases, the switch reduces or stops sending notifications, and the CNAs begin to accelerate their throughput rates. This active feedback protocol continuously balances traffic flow.

It is possible to construct simple converged networks of one or two switch hops without QCN. In fact, the FCoE protocol does not require the use of QCN in DCB-enabled Ethernet equipment. However, the general understanding is that building relatively complex multi-hop or end-to-end, data-center-wide converged networks based on DCB-enabled Ethernet equipment requires enabling QCN in the infrastructure. Networks that use the QCN protocol face several challenges:
- QCN protocol complexity: Implementing the flow tagging, statistical sampling, and congestion messaging is relatively complex. Identifying the proper timing and quanta of notification feedback to satisfy a wide variety of operating conditions is also difficult.
- Difficult interoperability process: Perfecting multi-vendor interoperability could take several years because of protocol complexity.
- No QCN support in current generation products: No DCB/FCoE products shipping today support the QCN protocol. Furthermore, most, if not all, products will require a hardware upgrade to support QCN. Products claiming to support QCN have unproven, untested hardware implementations. Vendors haven't performed any rigorous interoperability tests with production-level QCN software.
- Complete end-to-end support requirement: To enable QCN in a network, the entire data path must support the QCN protocol. All hardware across the DCB-enabled network must support QCN. This poses a significant problem because upgrading existing first-generation, DCB-based converged networks requires replacing or upgrading all DCB components.

Because of these challenges, only one-hop and two-hop networks will be reliable until next-generation hardware becomes available to support QCN. Most currently shipping hardware cannot support QCN and cannot be software upgraded to add this support. Therefore, larger DCB-based network deployments will require hardware upgrades.
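The reaction point behavior described above, cutting the transmit rate in proportion to the feedback quanta and then slowly recovering when notifications stop, can be sketched as follows. The gain and recovery constants are illustrative assumptions, not values defined by IEEE 802.1Qau or by this brief.

    # Simplified sketch of a QCN reaction point's rate control for one traffic flow.
    # The gain and recovery step are illustrative assumptions, not 802.1Qau values.

    class ReactionPoint:
        def __init__(self, line_rate_gbps: float):
            self.line_rate = line_rate_gbps
            self.current_rate = line_rate_gbps     # start at full speed
            self.target_rate = line_rate_gbps

        def on_congestion_notification(self, feedback_quanta: int, max_quanta: int = 63):
            """Cut the flow's rate in proportion to the congestion point's feedback."""
            reduction = 0.5 * (feedback_quanta / max_quanta)   # illustrative gain
            self.target_rate = self.current_rate
            self.current_rate *= (1 - reduction)

        def on_quiet_interval(self):
            """No notifications received: creep back toward the previous rate."""
            step = 0.1 * (self.target_rate - self.current_rate) + 0.01
            self.current_rate = min(self.target_rate, self.current_rate + step)

    rp = ReactionPoint(10.0)
    rp.on_congestion_notification(feedback_quanta=40)   # heavy congestion reported
    for _ in range(20):
        rp.on_quiet_interval()                          # congestion clears, rate recovers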
Data Center Bridging Exchange

The Data Center Bridging Exchange (DCBX) protocol provides two primary functions:
- Lets DCB-enabled Ethernet devices/ports advertise their DCB capabilities to their link partners
- Lets DCB-enabled Ethernet devices push preferred parameters to their link partners

DCBX supports discovery and exchange of network configuration information between DCB-compliant peer devices. DCBX enhances the Link Layer Discovery Protocol (LLDP) with more network status information and more parameters than LLDP carries. The specification separates DCBX exchange parameters into administered and operational groups. The administered parameters contain network device configurations. The operational parameters describe the operational status of those configurations. Devices can also specify a willingness to accept DCBX parameters from the attached link partner. This is most commonly supported in CNAs, which allow the attached DCB-enabled switch to set up their parameters.

NOTE: The Link Layer Discovery Protocol (LLDP), IEEE 802.1AB, defines a protocol and a set of managed objects that can be used for discovering the physical topology and connection end-point information from adjacent devices in 802 LANs and MANs. The protocol is not restricted from running on non-802 media.

Table 2. DCBX supported parameters

PFC parameters advertised:
- Indication of which priorities have PFC enabled
- Willingness to accept PFC recommendations (CNA)
- Number of priorities that can support PFC
- MACsec bypass capability

ETS parameters advertised:
- Number of traffic classes supported on the port
- Priority to traffic class mapping
- Willingness to accept ETS recommendations (CNA)
- Traffic class bandwidth allocations (for ETS TCs)
- Bandwidth allocation algorithms for each TC

QCN parameters advertised:
- Not currently in the standard

Other parameters advertised:
- How applications, for example FCoE, map to priorities

Figure 9 illustrates DCBX parameter negotiation between a CNA and the attached switch port where neither device is willing to accept DCBX parameter recommendations. In this case, the CNA and switch advertise their DCB capabilities to each other. The adapter chooses a storage traffic priority that is not compatible with the switch. The CNA and switch cannot properly exchange storage traffic with one another, so communication on that link does not happen. Typically, this generates an error that prompts you to reconfigure either the CNA or the switch parameters to make them compatible. The same situation can occur on links between switches.

Figure 9. DCBX static parameter exchange (the CNA parameters and switch parameters conflict, and the link fails)

The DCBX protocol's strength lies in its ability to perform dynamic negotiation using attributes called recommended and willingness. CNAs and switches using DCBX can advertise their willingness to adopt parameter settings from their link partner. In the example shown in Figure 10, a CNA communicates its initial ETS and PFC information and its willingness to consider parameters from the switch. The switch acknowledges this willingness and sends the CNA the recommended values for the ETS and PFC parameters. If the CNA can successfully adopt the recommended parameters, it re-advertises its DCBX parameters using the recommended values. The two devices are then able to communicate on the established link.
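The outcomes described for Figures 9 and 10 can be summarized in a short sketch: a willing device adopts its partner's recommended settings, while two unwilling devices with conflicting settings cannot exchange storage traffic. The data structures and function names below are illustrative only and are not the DCBX TLV format.

    from dataclasses import dataclass

    # Illustrative model of the DCBX outcomes described above; not the DCBX TLV format.

    @dataclass
    class DcbSettings:
        fcoe_priority: int          # priority/TC used for storage (FCoE) traffic
        pfc_enabled: frozenset      # priorities with PFC enabled
        ets_percentages: dict       # traffic class -> bandwidth percentage

    @dataclass
    class Port:
        settings: DcbSettings
        willing: bool               # willing to accept the peer's recommendation

    def negotiate(cna: Port, switch: Port) -> str:
        if cna.willing and not switch.willing:
            cna.settings = switch.settings          # CNA adopts switch recommendation
            return "link up: CNA adopted switch parameters"
        if switch.willing and not cna.willing:
            switch.settings = cna.settings
            return "link up: switch adopted CNA parameters"
        if cna.settings == switch.settings:
            return "link up: parameters already compatible"
        return "error: incompatible parameters, storage traffic cannot flow"

    sw = Port(DcbSettings(3, frozenset({3}), {"FCoE": 50, "LAN": 50}), willing=False)
    cna = Port(DcbSettings(4, frozenset({4}), {"FCoE": 60, "LAN": 40}), willing=True)
    print(negotiate(cna, sw))   # the CNA adopts the switch's recommended values (Figure 10)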

Figure 10. DCBX dynamic negotiation (the CNA advertises willingness, the switch sends recommended parameters, and the CNA re-advertises with the new parameters)

Migrating to converged fabrics

In a one-hop architecture, converged traffic goes from a server to a switch that splits it into Ethernet and Fibre Channel. In a two-hop architecture, converged traffic goes to a second switch before the split. The more switch hops in a DCB-enabled network, the more difficult it is to keep the network operating at peak efficiency while minimizing congestion. Figure 11 shows the expected industry path to convergence.

Figure 11. Industry path to convergence

This is the first phase of migration to converged fabrics. CNAs connect to converged fabric access switches that support DCB-enabled Ethernet, legacy Ethernet, and legacy FC. The CNAs provide converged connectivity between servers and the first-hop switch, which disaggregates the traffic to the legacy LAN and SAN infrastructure. Figure 12 compares a traditional deployment to the first phase of converged network deployment.

Figure 12. Comparison of traditional deployment and converged network, phase 1

Figure 13 shows how the next phases of deployment may occur as you update existing data centers or build new ones. Eventually a server will require only a single pair of redundant CNAs. Converged network switches will replace separate FC, 10 GbE, and IB switches.

Figure 13. Converged network deployment, phases 2 and 3

HP strategy

We believe that the transition to DCB/FCoE can be graceful. It need not disrupt existing network infrastructures if you first deploy at the server-to-network edge and then migrate farther into the network. With this approach, you gain the immediate benefit of reduced cable and adapter hardware with the least amount of disruption to the overall network architecture. As you deploy new servers, you can deploy DCB/FCoE with new CNAs and DCB/FCoE/FC-enabled edge/access switches. Doing this optimizes, simplifies, and reduces the cost of the server-to-network edge infrastructure, and you won't have to replace the entire data center communications infrastructure.

You should start by implementing DCB/FCoE technology only with those servers requiring access to FC SAN storage targets. Many data centers average about 60 to 80 percent LAN-only network attachment, so only the remaining 20 to 40 percent of servers would need both LAN and SAN connectivity. Not all servers need access to FC SANs.

Looking forward, many IT organizations are re-evaluating the network storage connectivity of their server infrastructure. Besides DCB/FCoE technology, other methods of converging traffic include iSCSI with 10 Gb storage devices and file-oriented network storage protocols such as NFS or CIFS. Neither of these technologies requires a DCB-enabled Ethernet network. Both can operate on traditional 1/10 Gb Ethernet infrastructure.

Transitioning the server-to-network edge first to accommodate FCoE/CEE maintains the existing architecture and management roles, keeping the existing SAN and LAN topologies. Updating the server-to-network edge offers the greatest benefit and simplification without disrupting the data center architecture.

For more information

- HP Multifunction Networking Products
- HP ProLiant networking: Ethernet network adapters
- "Server-to-network edge technologies: converged networks and virtual I/O" technology brief
- "Ethernet technology for industry-standard servers" technology brief
- "HP FlexFabric and Flex-10 technology" technology brief
- "Server virtualization technologies for x86-based HP BladeSystem and HP ProLiant servers" technology brief
- HP Virtual Connect Technology web page

Call to action

Send comments about this paper to:

Copyright 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

TC101220TB, December 2010


More information

Windows TCP Chimney: Network Protocol Offload for Optimal Application Scalability and Manageability

Windows TCP Chimney: Network Protocol Offload for Optimal Application Scalability and Manageability White Paper Windows TCP Chimney: Network Protocol Offload for Optimal Application Scalability and Manageability The new TCP Chimney Offload Architecture from Microsoft enables offload of the TCP protocol

More information

The Advantages of Multi-Port Network Adapters in an SWsoft Virtual Environment

The Advantages of Multi-Port Network Adapters in an SWsoft Virtual Environment The Advantages of Multi-Port Network Adapters in an SWsoft Virtual Environment Introduction... 2 Virtualization addresses key challenges facing IT today... 2 Introducing Virtuozzo... 2 A virtualized environment

More information

I/O Virtualization Using Mellanox InfiniBand And Channel I/O Virtualization (CIOV) Technology

I/O Virtualization Using Mellanox InfiniBand And Channel I/O Virtualization (CIOV) Technology I/O Virtualization Using Mellanox InfiniBand And Channel I/O Virtualization (CIOV) Technology Reduce I/O cost and power by 40 50% Reduce I/O real estate needs in blade servers through consolidation Maintain

More information

Ethernet, and FCoE Are the Starting Points for True Network Convergence

Ethernet, and FCoE Are the Starting Points for True Network Convergence WHITE PAPER Opportunities and Challenges with the Convergence of Data Center Networks 10GbE, Standards-Based DCB, Low Latency Ethernet, and FCoE Are the Starting Points for True Network Convergence Copyright

More information

A Whitepaper on. Building Data Centers with Dell MXL Blade Switch

A Whitepaper on. Building Data Centers with Dell MXL Blade Switch A Whitepaper on Building Data Centers with Dell MXL Blade Switch Product Management Dell Networking October 2012 THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS

More information

Simplify VMware vsphere* 4 Networking with Intel Ethernet 10 Gigabit Server Adapters

Simplify VMware vsphere* 4 Networking with Intel Ethernet 10 Gigabit Server Adapters WHITE PAPER Intel Ethernet 10 Gigabit Server Adapters vsphere* 4 Simplify vsphere* 4 Networking with Intel Ethernet 10 Gigabit Server Adapters Today s Intel Ethernet 10 Gigabit Server Adapters can greatly

More information

Enterasys Data Center Fabric

Enterasys Data Center Fabric TECHNOLOGY STRATEGY BRIEF Enterasys Data Center Fabric There is nothing more important than our customers. Enterasys Data Center Fabric Executive Summary Demand for application availability has changed

More information

HBA Virtualization Technologies for Windows OS Environments

HBA Virtualization Technologies for Windows OS Environments HBA Virtualization Technologies for Windows OS Environments FC HBA Virtualization Keeping Pace with Virtualized Data Centers Executive Summary Today, Microsoft offers Virtual Server 2005 R2, a software

More information

Intel Ethernet Switch Load Balancing System Design Using Advanced Features in Intel Ethernet Switch Family

Intel Ethernet Switch Load Balancing System Design Using Advanced Features in Intel Ethernet Switch Family Intel Ethernet Switch Load Balancing System Design Using Advanced Features in Intel Ethernet Switch Family White Paper June, 2008 Legal INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL

More information

How To Design A Data Centre

How To Design A Data Centre DATA CENTRE TECHNOLOGIES & SERVICES RE-Solution Data Ltd Reach Recruit Resolve Refine 170 Greenford Road Harrow Middlesex HA1 3QX T +44 (0) 8450 031323 EXECUTIVE SUMMARY The purpose of a data centre is

More information

Data Center Evolution and Network Convergence. Joseph L White, Juniper Networks

Data Center Evolution and Network Convergence. Joseph L White, Juniper Networks Joseph L White, Juniper Networks SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies and individual members may use this material

More information

Storage Networking Foundations Certification Workshop

Storage Networking Foundations Certification Workshop Storage Networking Foundations Certification Workshop Duration: 2 Days Type: Lecture Course Description / Overview / Expected Outcome A group of students was asked recently to define a "SAN." Some replies

More information

Building Enterprise-Class Storage Using 40GbE

Building Enterprise-Class Storage Using 40GbE Building Enterprise-Class Storage Using 40GbE Unified Storage Hardware Solution using T5 Executive Summary This white paper focuses on providing benchmarking results that highlight the Chelsio T5 performance

More information

IBM BladeCenter H with Cisco VFrame Software A Comparison with HP Virtual Connect

IBM BladeCenter H with Cisco VFrame Software A Comparison with HP Virtual Connect IBM BladeCenter H with Cisco VFrame Software A Comparison with HP Connect Executive Overview This white paper describes how Cisco VFrame Server Fabric ization Software works with IBM BladeCenter H to provide

More information

A Dell Technical White Paper Dell PowerConnect Team

A Dell Technical White Paper Dell PowerConnect Team Flow Control and Network Performance A Dell Technical White Paper Dell PowerConnect Team THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES.

More information

How To Write An Article On An Hp Appsystem For Spera Hana

How To Write An Article On An Hp Appsystem For Spera Hana Technical white paper HP AppSystem for SAP HANA Distributed architecture with 3PAR StoreServ 7400 storage Table of contents Executive summary... 2 Introduction... 2 Appliance components... 3 3PAR StoreServ

More information

HP Converged Infrastructure Solutions

HP Converged Infrastructure Solutions HP Converged Infrastructure Solutions HP Virtual Connect and HP StorageWorks Simple SAN Connection Manager Enterprise Software Solution brief Executive summary Whether it is with VMware vsphere, Microsoft

More information

How To Connect Virtual Fibre Channel To A Virtual Box On A Hyperv Virtual Machine

How To Connect Virtual Fibre Channel To A Virtual Box On A Hyperv Virtual Machine Virtual Fibre Channel for Hyper-V Virtual Fibre Channel for Hyper-V, a new technology available in Microsoft Windows Server 2012, allows direct access to Fibre Channel (FC) shared storage by multiple guest

More information

Brocade Solution for EMC VSPEX Server Virtualization

Brocade Solution for EMC VSPEX Server Virtualization Reference Architecture Brocade Solution Blueprint Brocade Solution for EMC VSPEX Server Virtualization Microsoft Hyper-V for 50 & 100 Virtual Machines Enabled by Microsoft Hyper-V, Brocade ICX series switch,

More information

Virtual networking technologies at the server-network edge

Virtual networking technologies at the server-network edge Virtual networking technologies at the server-network edge Technology brief Introduction... 2 Virtual Ethernet Bridges... 2 Software-based VEBs Virtual Switches... 2 Hardware VEBs SR-IOV enabled NICs...

More information

InfiniBand Software and Protocols Enable Seamless Off-the-shelf Applications Deployment

InfiniBand Software and Protocols Enable Seamless Off-the-shelf Applications Deployment December 2007 InfiniBand Software and Protocols Enable Seamless Off-the-shelf Deployment 1.0 Introduction InfiniBand architecture defines a high-bandwidth, low-latency clustering interconnect that is used

More information

Joint ITU-T/IEEE Workshop on Carrier-class Ethernet

Joint ITU-T/IEEE Workshop on Carrier-class Ethernet Joint ITU-T/IEEE Workshop on Carrier-class Ethernet Quality of Service for unbounded data streams Reactive Congestion Management (proposals considered in IEE802.1Qau) Hugh Barrass (Cisco) 1 IEEE 802.1Qau

More information

Server and Storage Consolidation with iscsi Arrays. David Dale, NetApp

Server and Storage Consolidation with iscsi Arrays. David Dale, NetApp Server and Consolidation with iscsi Arrays David Dale, NetApp SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA. Member companies and individual members may use this

More information

IP videoconferencing solution with ProCurve switches and Tandberg terminals

IP videoconferencing solution with ProCurve switches and Tandberg terminals An HP ProCurve Networking Application Note IP videoconferencing solution with ProCurve switches and Tandberg terminals Contents 1. Introduction... 3 2. Architecture... 3 3. Videoconferencing traffic and

More information

Converged block storage solution with HP StorageWorks MPX200 Multifunction Router and EVA

Converged block storage solution with HP StorageWorks MPX200 Multifunction Router and EVA Converged block storage solution with HP StorageWorks MPX200 Multifunction Router and EVA Business white paper MO E V to the storage infrastructure of the future. Table of contents Executive summary...3

More information

Expert Reference Series of White Papers. Planning for the Redeployment of Technical Personnel in the Modern Data Center

Expert Reference Series of White Papers. Planning for the Redeployment of Technical Personnel in the Modern Data Center Expert Reference Series of White Papers Planning for the Redeployment of Technical Personnel in the Modern Data Center info@globalknowledge.net www.globalknowledge.net Planning for the Redeployment of

More information

ethernet alliance Data Center Bridging Version 1.0 November 2008 Authors: Steve Garrison, Force10 Networks Val Oliva, Foundry Networks

ethernet alliance Data Center Bridging Version 1.0 November 2008 Authors: Steve Garrison, Force10 Networks Val Oliva, Foundry Networks Data Center Bridging Version 1.0 November 2008 Authors: Steve Garrison, Force10 Networks Val Oliva, Foundry Networks Gary Lee, Fulcrum Microsystems Robert Hays, Intel Ethernet Alliance 3855 SW 153 Drive

More information

Data Center Evolution and Network Convergence

Data Center Evolution and Network Convergence Joseph L White, Juniper Networks Author: Joseph L White, Juniper Networks SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies and

More information

Solution Brief Network Design Considerations to Enable the Benefits of Flash Storage

Solution Brief Network Design Considerations to Enable the Benefits of Flash Storage Solution Brief Network Design Considerations to Enable the Benefits of Flash Storage Flash memory has been used to transform consumer devices such as smartphones, tablets, and ultranotebooks, and now it

More information

Oracle Big Data Appliance: Datacenter Network Integration

Oracle Big Data Appliance: Datacenter Network Integration An Oracle White Paper May 2012 Oracle Big Data Appliance: Datacenter Network Integration Disclaimer The following is intended to outline our general product direction. It is intended for information purposes

More information

HP Virtual Connect Ethernet Cookbook: Single and Multi Enclosure Domain (Stacked) Scenarios

HP Virtual Connect Ethernet Cookbook: Single and Multi Enclosure Domain (Stacked) Scenarios HP Virtual Connect Ethernet Cookbook: Single and Multi Enclosure Domain (Stacked) Scenarios Part number 603028-003 Third edition August 2010 Copyright 2009,2010 Hewlett-Packard Development Company, L.P.

More information

How To Use The Cisco Mds F Bladecenter Switch For Ibi Bladecenter (Ibi) For Aaa2 (Ibib) With A 4G) And 4G (Ibb) Network (Ibm) For Anaa

How To Use The Cisco Mds F Bladecenter Switch For Ibi Bladecenter (Ibi) For Aaa2 (Ibib) With A 4G) And 4G (Ibb) Network (Ibm) For Anaa Cisco MDS FC Bladeswitch for IBM BladeCenter Technical Overview Extending Cisco MDS 9000 Family Intelligent Storage Area Network Services to the Server Edge Cisco MDS FC Bladeswitch for IBM BladeCenter

More information

N_Port ID Virtualization

N_Port ID Virtualization A Detailed Review Abstract This white paper provides a consolidated study on the (NPIV) feature and usage in different platforms and on NPIV integration with the EMC PowerPath on AIX platform. February

More information

Solving I/O Bottlenecks to Enable Superior Cloud Efficiency

Solving I/O Bottlenecks to Enable Superior Cloud Efficiency WHITE PAPER Solving I/O Bottlenecks to Enable Superior Cloud Efficiency Overview...1 Mellanox I/O Virtualization Features and Benefits...2 Summary...6 Overview We already have 8 or even 16 cores on one

More information

Configuring Cisco Nexus 5000 Switches Course DCNX5K v2.1; 5 Days, Instructor-led

Configuring Cisco Nexus 5000 Switches Course DCNX5K v2.1; 5 Days, Instructor-led Configuring Cisco Nexus 5000 Switches Course DCNX5K v2.1; 5 Days, Instructor-led Course Description Configuring Cisco Nexus 5000 Switches (DCNX5K) v2.1 is a 5-day ILT training program that is designed

More information

Fibre Forward - Why Storage Infrastructures Should Be Built With Fibre Channel

Fibre Forward - Why Storage Infrastructures Should Be Built With Fibre Channel Fibre Forward - Why Storage Infrastructures Should Be Built With Fibre Channel Prepared by: George Crump, Lead Analyst Prepared: June 2014 Fibre Forward - Why Storage Infrastructures Should Be Built With

More information

OPPORTUNITIES AND CHALLENGES WITH THE CONVERGENCE OF DATA CENTER NETWORKS

OPPORTUNITIES AND CHALLENGES WITH THE CONVERGENCE OF DATA CENTER NETWORKS WHITE PAPER OPPORTUNITIES AND CHALLENGES WITH THE CONVERGENCE OF DATA CENTER NETWORKS Juniper Networks is Well Positioned to Deliver a Superior Network Fabric for Next Generation Data Centers Copyright

More information

Opportunities and Challenges with the

Opportunities and Challenges with the WHITE PAPER Opportunities and Challenges with the Convergence of Data Center Networks Juniper Networks is Well Positioned to Deliver a Superior Network Fabric for Next Generation Data Centers Copyright

More information

Network Configuration Example

Network Configuration Example Network Configuration Example Configuring DCBX Application Protocol TLV Exchange Published: 2014-01-10 Juniper Networks, Inc. 1194 North Mathilda Avenue Sunnyvale, California 94089 USA 408-745-2000 www.juniper.net

More information

10GBASE T for Broad 10_Gigabit Adoption in the Data Center

10GBASE T for Broad 10_Gigabit Adoption in the Data Center 10GBASE T for Broad 10_Gigabit Adoption in the Data Center Contributors Carl G. Hansen, Intel Carrie Higbie, Siemon Yinglin (Frank) Yang, Commscope, Inc 1 Table of Contents 10Gigabit Ethernet: Drivers

More information

Oracle Virtual Networking Overview and Frequently Asked Questions March 26, 2013

Oracle Virtual Networking Overview and Frequently Asked Questions March 26, 2013 Oracle Virtual Networking Overview and Frequently Asked Questions March 26, 2013 Overview Oracle Virtual Networking revolutionizes data center economics by creating an agile, highly efficient infrastructure

More information

Ethernet Fabrics: An Architecture for Cloud Networking

Ethernet Fabrics: An Architecture for Cloud Networking WHITE PAPER www.brocade.com Data Center Ethernet Fabrics: An Architecture for Cloud Networking As data centers evolve to a world where information and applications can move anywhere in the cloud, classic

More information

Accelerating Development and Troubleshooting of Data Center Bridging (DCB) Protocols Using Xgig

Accelerating Development and Troubleshooting of Data Center Bridging (DCB) Protocols Using Xgig White Paper Accelerating Development and Troubleshooting of The new Data Center Bridging (DCB) protocols provide important mechanisms for enabling priority and managing bandwidth allocations between different

More information

Technical Overview of Data Center Networks Joseph L White, Juniper Networks

Technical Overview of Data Center Networks Joseph L White, Juniper Networks Joseph L White, Juniper Networks SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies and individual members may use this material

More information

CompTIA Storage+ Powered by SNIA

CompTIA Storage+ Powered by SNIA CompTIA Storage+ Powered by SNIA http://www.snia.org/education/courses/training_tc Course Length: 4 days 9AM 5PM Course Fee: $2,495 USD Register: https://www.regonline.com/register/checkin.aspx?eventid=635346

More information

Next Generation Data Center Networking.

Next Generation Data Center Networking. Next Generation Data Center Networking. Intelligent Information Network. עמי בן-עמרם, יועץ להנדסת מערכות amib@cisco.comcom Cisco Israel. 1 Transparency in the Eye of the Beholder With virtualization, s

More information

VERITAS Backup Exec 9.0 for Windows Servers

VERITAS Backup Exec 9.0 for Windows Servers WHITE PAPER Data Protection Solutions for Network Attached Storage VERITAS Backup Exec 9.0 for Windows Servers VERSION INCLUDES TABLE OF CONTENTS STYLES 1 TABLE OF CONTENTS Background...3 Why Use a NAS

More information

List of Figures and Tables

List of Figures and Tables List of Figures and Tables FIGURES 1.1 Server-Centric IT architecture 2 1.2 Inflexible allocation of free storage capacity 3 1.3 Storage-Centric IT architecture 4 1.4 Server upgrade: preparation of a new

More information

Windows Server 2008 R2 Hyper-V Live Migration

Windows Server 2008 R2 Hyper-V Live Migration Windows Server 2008 R2 Hyper-V Live Migration White Paper Published: August 09 This is a preliminary document and may be changed substantially prior to final commercial release of the software described

More information

Energy-Efficient Unified Fabrics: Transform the Data Center Infrastructure with Cisco Nexus Series

Energy-Efficient Unified Fabrics: Transform the Data Center Infrastructure with Cisco Nexus Series . White Paper Energy-Efficient Unified Fabrics: Transform the Data Center Infrastructure with Cisco Nexus Series What You Will Learn The Cisco Nexus family of products gives data center designers the opportunity

More information

Data Center Networking Designing Today s Data Center

Data Center Networking Designing Today s Data Center Data Center Networking Designing Today s Data Center There is nothing more important than our customers. Data Center Networking Designing Today s Data Center Executive Summary Demand for application availability

More information

The evolution of Data Center networking technologies

The evolution of Data Center networking technologies 0 First International Conference on Data Compression, Communications and Processing The evolution of Data Center networking technologies Antonio Scarfò Maticmind SpA Naples, Italy ascarfo@maticmind.it

More information

Customer Education Services Course Overview

Customer Education Services Course Overview Customer Education Services Course Overview Accelerated SAN Essentials (UC434S) This five-day course provides a comprehensive and accelerated understanding of SAN technologies and concepts. Students will

More information