WHITE PAPER

FCOE CONVERGENCE AT THE ACCESS LAYER WITH JUNIPER NETWORKS QFX3500 SWITCH

First Top-of-Rack Switch Built to Solve All the Challenges Posed by Access-Layer Convergence
Table of Contents

Executive Summary
Introduction
Access-Layer Convergence Modes
  Option 1: FCoE Transit Switch (DCB Switch with FIP Snooping)
    FCoE Servers with CNA
  Option 2: FCoE-FC Gateway (Using NPIV Proxy)
  Option 3: FCoE-FC Switch (Full FCF) (Not Recommended)
Deployment Models Available Today
  Rack-Mount Servers and Top-of-Rack FCoE-FC Gateway
  Blade Servers with Pass-Through Modules and Top-of-Rack FCoE-FC Gateway
  Blade Servers with Embedded Switch and Top-of-Rack FCoE-FC Gateway
  Blade Servers with Embedded FCoE-FC Gateway
  Servers Connected Through FCoE Transit Switch to an FCoE-Enabled Fibre Channel SAN Fabric
The Standards that Allow for Server I/O and Access-Layer Convergence
  Enhancements to Ethernet for Converged Data Center Networks (DCB)
  Enhancements to Fibre Channel for Converged Data Center Networks (FCoE)
Future Direction for FCoE
A Brief Note on iSCSI
Conclusion
About Juniper Networks

Table of Figures

Figure 1: The phases of convergence, from separate networks, to access-layer convergence, to the fully converged network
Figure 2: Operation of an FCoE transit switch vs. an FCoE-FC gateway
Figure 3: Operation of an FCoE transit switch
Figure 4: FCoE servers with CNA
Figure 5: Rack-mount servers and top-of-rack FCoE-FC gateway
Figure 6: Blade servers with pass-through modules and top-of-rack FCoE-FC gateway
Figure 7: Blade servers with embedded switch and top-of-rack FCoE-FC gateway
Figure 8: Servers connected through an FCoE transit switch to an FCoE-enabled SAN fabric
Figure 9: ETS and QCN
Executive Summary

In 2011, customers will finally be able to invest in convergence-enabling equipment and begin reaping the benefits of convergence in their data centers. With the first wave of standards now complete, both the IEEE Data Center Bridging (DCB) enhancements to Ethernet and the InterNational Committee for Information Technology Standards (INCITS) T11 FC-BB-5 standard for Fibre Channel over Ethernet (FCoE), enterprises can benefit from server- and access-layer I/O convergence while continuing to leverage their investment in their existing aggregation, core LAN, and Fibre Channel (FC) backbones.

So why the focus on server and access-layer I/O convergence? Simply put, the industry recognizes that the first wave of standards does not meet the needs of full convergence, and so it is working on a second wave of standards, including FC-BB-6, as well as various forms of fabric technology to better address the challenges of full convergence. The new standards are designed to provide lower cost convergence strategies for the smaller enterprise, and to address the scaling issues that come about from convergence in general as well as from increased data center scale. As a result, 2011 is the year to focus on the benefits to be gained from converging the access layer while laying a foundation for the future.

Juniper Networks QFX3500 Switch is the first top-of-rack switch built to solve all of the challenges posed by access-layer convergence. It works for both rack-mount and blade servers, and for organizations with combined or separate LAN and storage area network (SAN) teams. It is also the first product to leverage a new generation of ASIC technologies. It offers 1.28 terabits per second (Tbps) of bandwidth implemented with a single ultra-low latency ASIC, and soft-programmable ports capable of Gigabit Ethernet (GbE), 10GbE, 40GbE, and 2/4/8 Gbps Fibre Channel, supported through small form-factor pluggable (SFP+) transceivers for GbE copper, 10GbE copper direct attach cable (DAC) and optical, and quad small form-factor pluggable (QSFP) dense optical connectivity.
Figure 1: The phases of convergence, from separate networks (Phase 1, with SAN A and SAN B separate from the LAN), to access-layer convergence (Phase 2), to the fully converged network (Phase 3).
Introduction

The network is the critical enabler of all services delivered from the data center. A simple, streamlined, and scalable data center network fabric can deliver greater efficiency and productivity, as well as lower operating costs. Such a network also allows the data center to support much higher levels of business agility and not become a bottleneck that hinders a company from releasing new products or services. To allow businesses to make sound investment decisions, this white paper will look at the following areas to fully clarify the most interesting options for convergence in 2011:

1. Review the different types of convergence-capable products that are available on the market based upon the current standards, and consider the capabilities of those products
2. Consider the deployment scenarios for those products
3. Look forward to some of the new product and solution capabilities expected over the next couple of years

Access-Layer Convergence Modes

When buying a convergence platform, it is possible to deploy products based on three very different modes of operation. Products on the market today may be capable of one or more of these modes, depending on hardware and software configuration and license enablement:

- FCoE transit switch: a DCB switch with FCoE Initialization Protocol (FIP) snooping
- FCoE-FC gateway: using N_Port ID Virtualization (NPIV) proxy
- FCoE-FC switch: full Fibre Channel Forwarder (FCF) capability

In principle, these systems can be used in multiple places within a deployment. However, for the purpose of this document and based on the most likely deployments in 2011, only the server access-layer convergence model will be covered.

Figure 2: Operation of an FCoE transit switch vs. an FCoE-FC gateway (the transit switch applies FIP snooping ACLs between server VN_Ports and the FCF's VF_Ports; the gateway acts as an NPIV proxy toward the fabric's F_Ports)
Option 1: FCoE Transit Switch (DCB Switch with FIP Snooping)

In this model, the SAN team enables their backbone SAN fabric for FCoE, while the network team deploys a DCB top-of-rack switch with FIP snooping. Servers are deployed with converged network adapters (CNAs), and blade servers are deployed with pass-through modules or embedded switches. These are connected to the top-of-rack switch, which then has Ethernet connectivity to the LAN aggregation layer and Ethernet connectivity to the FCoE ports of the SAN backbone.

A common question at this point is whether a switch with no Fibre Channel stack can indeed be a viable part of a converged deployment and, in particular, whether such a switch provides not just the necessary security but also the performance and manageability required in a storage network deployment. Since this is, at one level, just a Layer 2 switch, this solution ensures that the switch in each server rack is not consuming an FC domain ID. This matters because Fibre Channel networks have a scale restriction that limits them to just a couple of tens of switches. As convergence and 10GbE force a move towards top-of-rack switches, any solution deployed must ensure that convergence does not cause an FC SAN scaling problem.

Figure 3: Operation of an FCoE transit switch (FCoE servers with CNAs connect through LAGs to an FCoE-enabled SAN)

FCoE Servers with CNA

A rich implementation of an FCoE transit switch will provide strong management and monitoring of the traffic separation, allowing the SAN team to monitor FCoE traffic throughput. Specifically, a fully manageable switch will allow the user to monitor traffic on a per user priority and per priority group basis, and not just per port. FIP snooping, as defined in the FCoE standard, provides perimeter protection, ensuring that the presence of an Ethernet layer in no way impacts existing SAN security. The SAN backbone can be simply FCoE-enabled with either FCoE blades within chassis-based systems or FCoE-FC gateways connected to the edge of the SAN backbone. In addition, the traditional Fibre Channel Security Protocols (FC-SP) mechanisms work seamlessly through FCoE, allowing CNA-to-FCF authentication to be used through the switch.

Perhaps less obviously, FIP snooping also means that the switch has a very clear view of each and every FCoE session that is running through it, both in terms of the path, which is derived from the source and destination media access control (MAC) addresses of the virtual Fibre Channel ports, and in terms of the actual status of the virtual connection, which is monitored by snooping the FIP keepalives.
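This session-tracking role is easy to picture with a small model. The sketch below is a simplified, hypothetical rendering of what a FIP snooping switch maintains, not Juniper's implementation: it watches FIP logins to build ACLs that pin each VN_Port MAC to its FCF, refreshes state on keepalives, and ages sessions out when keepalives stop (the 20-second timeout here is illustrative).

```python
import time

FIP_FLOGI_ACC = "FLOGI_ACC"   # fabric login accepted by the FCF
FIP_LOGO = "LOGO"             # fabric logout
FIP_KEEPALIVE = "KEEPALIVE"   # periodic liveness message from the VN_Port

KEEPALIVE_TIMEOUT = 20.0      # illustrative multiple of the FIP keepalive period

class FipSnooper:
    """Toy model of FIP snooping: maintains the ACLs that allow FCoE frames
    only between a logged-in VN_Port and the FCF that accepted its login."""

    def __init__(self):
        self.sessions = {}  # (vn_port_mac, fcf_mac) -> time of last keepalive

    def on_fip_frame(self, op, vn_port_mac, fcf_mac):
        key = (vn_port_mac, fcf_mac)
        if op == FIP_FLOGI_ACC:
            self.sessions[key] = time.time()
            print(f"permit FCoE {vn_port_mac} <-> {fcf_mac}")   # install ACL
        elif op == FIP_KEEPALIVE and key in self.sessions:
            self.sessions[key] = time.time()                    # refresh state
        elif op == FIP_LOGO and key in self.sessions:
            del self.sessions[key]
            print(f"remove FCoE ACL {vn_port_mac} <-> {fcf_mac}")

    def expire_stale(self):
        """Sessions whose keepalives stop are torn down, so the ACLs always
        mirror the true state of each virtual FC link."""
        now = time.time()
        for key in [k for k, t in self.sessions.items()
                    if now - t > KEEPALIVE_TIMEOUT]:
            del self.sessions[key]
            print(f"expire FCoE ACL {key[0]} <-> {key[1]}")

snooper = FipSnooper()
snooper.on_fip_frame(FIP_FLOGI_ACC, "0e:fc:00:01:00:01", "00:05:86:aa:bb:cc")
snooper.on_fip_frame(FIP_KEEPALIVE, "0e:fc:00:01:00:01", "00:05:86:aa:bb:cc")
```

The key property is that the forwarding permission is derived entirely from snooped control traffic: the switch never participates in the FC protocol, yet its perimeter ACLs track the fabric's own view of which virtual links exist.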
Just as with any Ethernet deployment, the switch can use a link aggregation group (LAG) to balance the Ethernet packets (including FCoE) across multiple links. This load balancing can include the OX_ID (Fibre Channel exchange ID) in order to carry out the Fibre Channel best practice of exchange-based load balancing. Finally, the FCoE protocol itself includes load-balancing capabilities to ensure that the FCoE servers are evenly and appropriately distributed across the multiple FCoE fabric connections.

FCoE transit switches have several advantages:

- Low-cost top-of-rack switch
- Rich monitoring of FCoE traffic at the top of rack (QFX3500 Switch)
- FCoE enablement of the SAN backbone (FCoE blades or FCoE-FC gateway) managed by the SAN team for clean management separation
- Load balancing carried out between CNAs and the FCoE ports of the SAN fabric, as well as point-to-point throughout the Ethernet infrastructure
- Comprehensive security maintained through FIP snooping and FC-SP
- No heterogeneous support issues, as the top of rack is L2 connectivity only

Figure 4: FCoE servers with CNA

Option 2: FCoE-FC Gateway (Using NPIV Proxy)

In this model, the SAN and Ethernet teams agree jointly to deploy an FCoE-FC top-of-rack gateway. From a cabling perspective, the deployment is identical to Option 1, with the most visible difference being that the cable between the top of rack and the SAN backbone is now carrying native Fibre Channel traffic rather than FCoE traffic. As with Option 1, this solution ensures that the switch in each server rack is not consuming an FC domain ID. In this case, however, unlike Option 1, a much richer level of Fibre Channel functionality has been enabled within the switch. The FCoE-FC gateway uses NPIV technology so that it presents to the servers as an FCoE-enabled Fibre Channel switch, and presents to the SAN backbone as a group of servers. It then simply proxies sessions from one domain to the other, with intelligent load balancing and automated failover capability across the Fibre Channel links to the fabric.
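A minimal sketch of the NPIV proxy idea follows; all names here (including the port labels) are hypothetical illustrations, not the QFX3500's behavior. The gateway performs one fabric login per FC uplink, then maps each server login onto the least-loaded uplink, so every server session receives its own fabric-assigned identity while the gateway itself never consumes a domain ID.

```python
class NpivProxyGateway:
    """Toy FCoE-FC gateway: proxies server logins onto FC uplinks via NPIV."""

    def __init__(self, uplinks):
        # One base FLOGI per uplink; NPIV then allows many logins (FDISCs)
        # to be multiplexed over each physical FC link.
        self.sessions_per_uplink = {u: [] for u in uplinks}

    def server_login(self, server_wwpn):
        # Session-based load balancing: pick the least-loaded uplink.
        uplink = min(self.sessions_per_uplink,
                     key=lambda u: len(self.sessions_per_uplink[u]))
        self.sessions_per_uplink[uplink].append(server_wwpn)
        # A real gateway would now send an FDISC on this uplink and relay
        # the fabric-assigned N_Port_ID back to the server's VN_Port.
        return uplink

    def uplink_failed(self, uplink):
        # Automated failover: re-home orphaned sessions to surviving links.
        orphans = self.sessions_per_uplink.pop(uplink)
        return [(wwpn, self.server_login(wwpn)) for wwpn in orphans]

gw = NpivProxyGateway(["fc-0/0/0", "fc-0/0/1"])   # hypothetical port names
for n in range(4):
    gw.server_login(f"21:00:00:24:ff:00:00:{n:02x}")
print(gw.uplink_failed("fc-0/0/0"))  # sessions move to fc-0/0/1
```

Because the fabric sees only ordinary NPIV logins, the gateway looks like a server to the SAN, which is exactly why it sidesteps both domain ID consumption and switch-to-switch interoperability concerns.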
FCoE-FC gateways have several advantages:

- Clean separation of management through role-based access control (QFX3500 Switch)
- No need for FCoE enablement of the SAN backbone
- Fine-grained FCoE session-based load balancing (at the virtual machine level for NPIV-enabled hypervisors; QFX3500 Switch), and full Ethernet LAG with exchange-based load balancing on the Ethernet-facing connectivity
- No heterogeneous support issues, as the FCoE-FC gateway presents to the SAN fabric as a Fibre Channel-enabled server (N_Port to F_Port)
- Available post deployment as a license upgrade and fungible port reconfiguration, with no additional hardware (QFX3500 Switch)
- Support for an upstream switch, such as an embedded switch in a blade server shelf (QFX3500 Switch), as well as direct CNA connectivity or connectivity via blade server pass-through modules

Option 3: FCoE-FC Switch (Full FCF) (Not Recommended)

For deployments of any size, there is no value to local FC switching, as any rack is either pure server or pure storage. In addition, although the FC SAN standards limit deployments to 239 switches, the practical supported limits are typically within the 16 to 32 range (in reality, most deployments are kept well below these limits). As such, this option has limited value in production data centers. For very small configurations where a single switch needs to connect to both servers and storage, Juniper believes that Internet Small Computer System Interface (iSCSI) is the best approach in 2011, while the FC-BB-6 VN2VN model (see the Future Direction for FCoE section later in this white paper) will be the preferred FCoE end-to-end model in 2012.

Deployment Models Available Today

As previously noted, this paper focuses on deployments that apply to server access-layer convergence. As such, it is assumed that this access layer is in turn connecting both to some form of Ethernet aggregation/core layer on one side and a Fibre Channel backbone on the other. The term Fibre Channel backbone implies a traditional FC SAN of some form, which has the disk and tape attached to it, as well as, most likely, existing servers. By leveraging either an FCoE transit switch or an FCoE-FC gateway, whether separately or together, there are a number of deployment options for supporting both rack-mount servers and blade servers. Each approach has its merits, and organizations may want to use different approaches depending on their requirements.

In terms of physical deployment in most data centers, the Ethernet aggregation and core, the FC backbone, and the disk and tape are likely to be colocated in some centralized location within the data center. From a cabling perspective, this means that the same physical cable infrastructure between that central location and the server racks can easily support any of the deployment models discussed below.

Rack-Mount Servers and Top-of-Rack FCoE-FC Gateway

This deployment model is perhaps the most recognized and best understood. The QFX3500 Switch fully supports this model and, unlike other products, the QFX3500 enables this mode through a single license that allows up to 12 of its 48 SFP+ ports to be configured for 2/4/8 Gbps Fibre Channel instead of 10GbE.
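As a back-of-the-envelope illustration of the fungible-port idea, the sketch below checks a requested port plan against the constraint just stated (at most 12 of 48 SFP+ ports in FC mode). Which specific port numbers are FC-capable is a made-up assumption here, not the QFX3500's actual port map.

```python
MAX_FC_PORTS = 12
TOTAL_PORTS = 48
# Hypothetical assumption: the first and last six ports are FC-capable.
FC_CAPABLE = set(range(0, 6)) | set(range(42, 48))

def validate_port_plan(fc_ports):
    """Return (ok, detail) for a requested set of FC-mode ports."""
    fc_ports = set(fc_ports)
    if len(fc_ports) > MAX_FC_PORTS:
        return False, f"at most {MAX_FC_PORTS} ports may run Fibre Channel"
    if not fc_ports <= FC_CAPABLE:
        return False, f"ports {sorted(fc_ports - FC_CAPABLE)} are Ethernet-only"
    return True, f"{len(fc_ports)} FC ports, {TOTAL_PORTS - len(fc_ports)} 10GbE ports"

print(validate_port_plan(range(0, 6)))    # acceptable plan
print(validate_port_plan(range(0, 14)))   # rejected: too many FC ports
```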
Figure 5: Rack-mount servers and top-of-rack FCoE-FC gateway

Blade Servers with Pass-Through Modules and Top-of-Rack FCoE-FC Gateway

This model is similar to the previous rack-mount servers and top-of-rack FCoE-FC gateway model. The challenge with this model is the complex cabling that traditionally accompanies pass-through modules. Using pass-through has the benefit of removing an entire layer from the network topology, thereby simplifying the data center, ensuring a single network operating system at all layers, and allowing the edge of the network to leverage the richer functionality available with the feature-rich ASICs used at the top of rack. The use of modern pass-through modules and well-constructed cabling solutions provides all the cable simplicity benefits of an embedded blade switch with none of the limitations.

Figure 6: Blade servers with pass-through modules and top-of-rack FCoE-FC gateway
Blade Servers with Embedded Switch and Top-of-Rack FCoE-FC Gateway

To support this deployment model, it is necessary to ensure that both the CNAs and the FCoE-FC gateway have particularly feature-rich implementations of the full FC-BB-5 standard in order to support many-to-many L2 visibility for fan-in, load balancing, and high availability. The QFX3500 Switch is the first fully FC-BB-5-enabled gateway capable of easily supporting upstream switches, including third-party embedded blade shelf switches. Juniper strongly recommends using such switches only if they have implemented FIP snooping for perimeter protection, and if they have fully standards-based, feature-rich implementations.

When deploying a switch in between the servers and the gateway, an Ethernet LAG is formed between the two devices, providing optimum packet distribution. In the case of the QFX3500 Switch, the Fibre Channel OX_ID is included in the LAG hash, ensuring exchange-based load balancing across the link (a sketch of this hashing idea appears at the end of this section). Additionally, for enhanced scaling, the ports of the QFX3500 can be configured in a trusted mode when it is known that there is an upstream switch with FIP snooping.

Increasingly, however, this option is seen as undesirable, as it adds an additional network tier and makes it hard to standardize the network access layer in a multivendor server environment.

Figure 7: Blade servers with embedded switch and top-of-rack FCoE-FC gateway

Blade Servers with Embedded FCoE-FC Gateway

Typically, embedded switches have a limited power and heat budget, so the simpler the module the better. There is also an issue with limited space for port connections. With a gateway, some of these ports must be Ethernet and some must be Fibre Channel, further restricting the available bandwidth in both cases. In addition, such modules are not commonly available for all blade server families, making the deployment of a standard and consistent infrastructure challenging. Overall, these issues make this an undesirable use case.

Servers Connected Through FCoE Transit Switch to an FCoE-Enabled Fibre Channel SAN Fabric

There is an interesting case for using FCoE transit switches as the access layer, connecting both to the Ethernet aggregation and to an FCoE-enabled Fibre Channel SAN fabric. An FCoE-FC gateway has to be actively managed and monitored by both the SAN and LAN teams, a considerable challenge for some organizations. An FCoE transit switch is not active at the FCoE layer of the protocol stack, so there is nothing for the SAN team to actively manage. Therefore, while the SAN team would still need monitoring capabilities, there is no active overlap of management, and this minimizes the possibility of configuration mistakes by different groups.
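The exchange-based hashing mentioned above can be pictured in a few lines. This is a generic illustration of the idea, not Juniper's actual hash function: including the FC OX_ID alongside the MAC pair keeps every frame of one FC exchange on one LAG member (preserving in-order delivery), while spreading different exchanges, even between the same two endpoints, across the bundle.

```python
import zlib

def lag_member(src_mac: bytes, dst_mac: bytes, ox_id: int, n_links: int) -> int:
    """Pick a LAG member link for an FCoE frame. Hashing on the OX_ID in
    addition to the MAC pair yields exchange-based load balancing: frames
    of one FC exchange stay on one link, distinct exchanges spread out."""
    key = src_mac + dst_mac + ox_id.to_bytes(2, "big")
    return zlib.crc32(key) % n_links

vn_port = bytes.fromhex("0efc00010001")   # FPMA-style VN_Port MAC
fcf     = bytes.fromhex("000586aabbcc")   # FCF MAC
# Four concurrent exchanges between the same pair land on different links:
print([lag_member(vn_port, fcf, ox, n_links=4) for ox in (0x10, 0x11, 0x12, 0x13)])
```

Hashing only on the MAC pair would pin all storage traffic between one server and one FCF to a single link; adding the exchange ID is what lets a LAG carry storage traffic at full aggregate bandwidth without reordering within an exchange.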
There are various ways to enable the SAN fabric for FCoE. One model is to include some FCoE-enabled switches within the SAN fabric; this can be accomplished by adding an FCoE blade to one of the chassis-based SAN directors. As with the previous use cases, the FCoE ports deployed on the SAN fabric must support multiple virtual fabric ports per physical port for this deployment to be viable. Another option is to use the QFX3500 Switch configured as an FCoE-FC gateway, connected locally to a pure FC SAN fabric and administered by the SAN team. For larger customers, where the merging of LAN and SAN network teams is unlikely to happen for several years, this provides a very clean and simple converged deployment model.

Figure 8: Servers connected through an FCoE transit switch to an FCoE-enabled SAN fabric (the SAN fabrics are managed by the SAN team, the transit switches by the LAN team, and the servers by the server team)

The Standards that Allow for Server I/O and Access-Layer Convergence

Enhancements to Ethernet for Converged Data Center Networks (DCB)

Ethernet, originally developed to handle traffic using a best-effort delivery approach, has mechanisms to support lossless traffic through 802.3X Pause, but these are rarely deployed. When used in a converged network, Pause frames can lead to cross-traffic blocking and congestion. Ethernet also has mechanisms to support fine-grained queuing (user priorities), but again, these are rarely deployed within the data center. The next logical step for Ethernet will be to leverage these capabilities and enhance existing standards to meet the needs of convergence and virtualization, propelling Ethernet into the forefront as the preeminent infrastructure for LANs, SANs, and high-performance computing (HPC) clusters. These enhancements benefit Ethernet I/O convergence (remembering that most servers have multiple 1GbE network interface cards not for bandwidth but to support multiple network services), as well as existing Ethernet- and IP-based storage protocols such as network-attached storage (NAS) and iSCSI. These enhancements also provide the appropriate platform for supporting FCoE. In the early days, when these standards were being developed and before they moved under the auspices of the IEEE, the term Converged Enhanced Ethernet (CEE) was used to identify them.
Data Center Bridging (DCB) is the resulting set of IEEE standards. Ethernet needed a variety of enhancements to support I/O, network convergence, and server virtualization. Server virtualization is covered in other Juniper white papers, even though it is part of the DCB protocol set. With respect to I/O and network convergence, the development of new standards began with the following existing standards:

1. User Priority for Class of Service, 802.1p, which already allows identification of eight separate lanes of traffic (used as-is)
2. Ethernet Flow Control (Pause, with symmetric and/or asymmetric flow control), 802.3X, which is leveraged for priority-based flow control (PFC)
3. MAC Control Frame, 802.3bd, modified to allow 802.3X Pause to apply to individual user priorities

A number of new standards that leverage these components have been developed and have either been formally approved or are in the final stages of the approval process. These include:

1. Priority-based Flow Control (PFC), IEEE 802.1Qbb, which applies traditional 802.3X Pause to individual priorities instead of to the entire port
2. Enhanced Transmission Selection (ETS), IEEE 802.1Qaz, which provides a grouping of priorities and bandwidth allocation to those groups
3. Ethernet Congestion Management (QCN), IEEE 802.1Qau, which is a cross-network, as opposed to point-to-point, backpressure mechanism
4. Data Center Bridging Exchange Protocol (DCBx), part of the ETS standard, for auto-negotiation

The final versions of the standards specify minimum requirements for compliance, detail the maximum in terms of external requirements, and also describe in some detail the options for implementing internal behavior, along with the downsides of some lower cost but standards-compliant ways of implementing them. It is important to note that these standards are separate from the efforts to solve the Layer 2 multipathing issues, which are not technically necessary to make convergence work. Also, neither these standards nor those around L2 multipathing address a number of other challenges that arise when networks are converged and flattened.

Figure 9: ETS and QCN (left: per-priority pause behavior across TX queues and RX buffers; right: ETS class groups showing offered vs. realized traffic over time)
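The ETS behavior sketched in Figure 9, where a priority group's unused allocation is lent to busier groups, can be expressed as a weighted redistribution. The code below is a simplified model of that sharing under stated assumptions, not the scheduler of any particular switch: each group is guaranteed its weighted share of the link, and whatever it leaves idle is re-offered, by weight, to groups that still have traffic waiting.

```python
def ets_allocate(link_gbps, groups):
    """groups: {name: (weight_percent, offered_gbps)} -> realized Gbps.
    Weighted max-min sharing: guaranteed share first, then unused
    bandwidth is redistributed by weight among still-hungry groups."""
    realized = {g: 0.0 for g in groups}
    spare = link_gbps
    active = dict(groups)
    while active and spare > 1e-9:
        total_w = sum(w for w, _ in active.values())
        share = {g: spare * w / total_w for g, (w, _) in active.items()}
        spare = 0.0
        for g, (w, offered) in list(active.items()):
            take = min(share[g], offered - realized[g])
            realized[g] += take
            spare += share[g] - take        # leftover goes back into the pool
            if realized[g] >= offered - 1e-9:
                del active[g]               # satisfied groups stop competing
    return realized

# 10G link: the LAN group offers only 2G, so SAN and HPC absorb its slack
# in proportion to their weights (SAN ~5.33G, HPC ~2.67G).
print(ets_allocate(10, {"LAN": (40, 2.0), "SAN": (40, 6.0), "HPC": (20, 5.0)}))
```

This is exactly the property that makes convergence safe for storage: the SAN priority group can never be starved below its configured share, yet the link is never left underused when other groups go quiet.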
Enhancements to Fibre Channel for Converged Data Center Networks (FCoE)

FCoE is the protocol developed within T11. The FCoE protocol was developed by the T11 Technical Committee, a subgroup of the International Committee for Information Technology Standards (INCITS), as part of the Fibre Channel Backbone 5 (FC-BB-5) project. The standard was passed over to INCITS for public comment and final ratification in 2009, and has since been formally ratified. In 2009, T11 started development work on Fibre Channel Backbone 6 (FC-BB-6), which is intended to address a number of issues not covered in the first standard and to develop a number of new deployment scenarios.

FCoE was designed to allow organizations to move to Ethernet-based storage while, at least in theory, minimizing the cost of change. To the storage world, FCoE is, in many ways, just FC with a new physical media type; many of the tools and services remain the same. To the Ethernet world, FCoE is just another upper layer protocol riding over Ethernet. The FC-BB-5 standard clearly defines all of the details involved in mapping FC through an Ethernet layer, whether directly or through simplified L2 connectivity. It lays out the responsibilities of the FCoE-enabled endpoints and fabrics, as well as those of the Ethernet layer. Finally, it clearly states the additional security mechanisms that are recommended to maintain the level of security that a physically separate SAN traditionally provides. Overall, apart from the scale-up and scale-down aspects, FC-BB-5 defines everything needed to build and support the products and solutions discussed earlier.

While the development of FCoE as an industry standard will bring the deployment of unified data center infrastructures closer to reality, FCoE by itself is not enough to complete the necessary convergence. Many additional enhancements to Ethernet, and changes to the way networking products are designed and deployed, are required to make it a viable, useful, and pragmatic implementation. Many, though not all, of the additional enhancements are provided by the standards developed through the IEEE DCB effort.

In theory, the combination of the DCB and FCoE standards allows for full network convergence. In reality, they only solve the problem for relatively small-scale data centers. Juniper meets the challenge of applying these techniques to larger deployments by using these protocols purely for server- and access-layer I/O convergence, through the use of FCoE transit switches (DCB switches with FIP snooping) and FCoE-FC gateways (using N_Port ID Virtualization to eliminate SAN scaling and heterogeneous support issues).

Juniper Networks EX4500 Ethernet Switch and QFX3500 Switch both support an FCoE transit switch mode. The QFX3500 also supports FCoE-FC gateway mode. These products are industry firsts in many ways:

1. The EX4500 and QFX3500 are fully standards-based, with rich implementations from both a DCB and an FC-BB-5 perspective.
2. The EX4500 and QFX3500 are purpose-built FCoE transit switches.
3. The QFX3500 is a purpose-built FCoE-FC gateway which includes fungible combined Ethernet/Fibre Channel ports.
4. The QFX3500 features a single Packet Forwarding Engine (PFE) design.
5. The EX4500 and QFX3500 switches both include feature-rich L3 capabilities.
6. The QFX3500 supports low latency with cut-through switching.
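The earlier point that FCoE is, to the storage world, just FC with a new media type can be made concrete by looking at the FC-BB-5 encapsulation itself. The sketch below assembles a simplified FCoE frame (EtherType 0x8906); the field sizes follow the standard's published layout (a version nibble plus reserved bits, a one-byte start-of-frame code, the untouched FC frame, then an end-of-frame code and reserved padding), but treat the byte arithmetic here as illustrative rather than authoritative.

```python
import struct

FCOE_ETHERTYPE = 0x8906
SOF_I3, EOF_T = 0x2E, 0x42   # common start/end-of-frame delimiter codes

def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a complete FC frame (FC header + payload + FC CRC) for Ethernet.
    The FC frame itself is carried unmodified, which is why FC tools and
    services keep working: only the physical media framing changes."""
    fcoe_hdr = bytes(13) + bytes([SOF_I3])   # version nibble + reserved bits, then SOF
    trailer = bytes([EOF_T]) + bytes(3)      # EOF plus reserved padding
    return (dst_mac + src_mac
            + struct.pack("!H", FCOE_ETHERTYPE)
            + fcoe_hdr + fc_frame + trailer)

fcf_mac = bytes.fromhex("000586aabbcc")
vn_mac  = bytes.fromhex("0efc00010001")
frame = fcoe_frame(fcf_mac, vn_mac, fc_frame=bytes(36))  # dummy 36-byte FC frame
print(len(frame), "bytes on the wire (before the Ethernet FCS)")
```

Note one practical consequence visible even in this toy: a full-sized 2,112-byte FC payload plus headers exceeds the standard 1,500-byte Ethernet payload, which is why FCoE deployments require baby-jumbo or jumbo frame support end to end.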
Future Direction for FCoE

There are two key initiatives underway within FC-BB-6 which will prove critical to the adoption of FCoE for small and large businesses alike.

For smaller businesses, a new FCoE mode has been developed, allowing for a fully functional FCoE deployment without the need for either the traditional Fibre Channel services stack or FC forwarding. Instead, the FCoE end devices directly discover and attach to each other through a pure L2 Ethernet infrastructure. This can be as simple as a DCB-enabled Ethernet switch, with the addition of FIP snooping for security. It makes FCoE simpler than either iSCSI or NAS, since it no longer needs a complex Fibre Channel (or FCoE) switch, and because the FCoE endpoints have proper discovery mechanisms. This mode of operation is commonly referred to as VN_Port to VN_Port, or VN2VN. It can be used by itself for small to medium scale FCoE deployments, or in conjunction with the existing FCoE models for larger deployments, allowing them to benefit from local L2 connectivity.
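In VN2VN mode, each endpoint must establish a locally unique FC address on the L2 segment before it can talk, since there is no fabric to assign one. The sketch below is a loose, hypothetical model of that claim-and-probe idea (real FIP VN2VN discovery involves more message types, timers, and states): a node proposes an N_Port_ID, backs off if a neighbor on the segment already holds it, and otherwise keeps the claim and can then log in to its peers directly.

```python
import random

class Vn2vnNode:
    """Toy VN2VN endpoint: claims an N_Port_ID by probing the L2 segment.
    A loose model of the FC-BB-6 idea, not a protocol implementation."""

    def __init__(self, name, segment):
        self.name, self.segment = name, segment
        self.n_port_id = None

    def claim_id(self):
        while self.n_port_id is None:
            candidate = random.randint(0x010101, 0xEFFFFF)
            # Stand-in for multicasting a FIP probe and listening for
            # an objection from any neighbor already holding the ID.
            taken = {n.n_port_id for n in self.segment if n is not self}
            if candidate not in taken:
                self.n_port_id = candidate
        return self.n_port_id

segment = []
for name in ("server-a", "server-b", "jbod-1"):
    node = Vn2vnNode(name, segment)
    segment.append(node)
    print(name, hex(node.claim_id()))
# Endpoints can now log in to each other point-to-point over lossless L2.
```

The appeal for small deployments is visible in what is absent: no FCF, no name server, no zoning database, just endpoints with a distributed way to avoid address collisions on a DCB-enabled segment.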
For larger businesses, a set of approaches is being investigated to remove the practical scaling restrictions that currently limit deployment sizes. As this work continues, it is hoped that the standards will evolve not only to solve some of these scaling limitations, but also to more fully address many of the other challenges that arise as a result of blending L2 switching, FC forwarding, and FC services. Juniper fully understands these challenges, which are similar to the challenges of blending L2 Ethernet, L3 IP forwarding, and higher level network services for routing. As part of Juniper's 3-2-1 data center architecture, we have already demonstrated many of these approaches with Juniper Networks EX Series Ethernet Switches, MX Series 3D Universal Edge Routers, SRX Series Services Gateways, and Juniper Networks Junos Space.

A Brief Note on iSCSI

Although iSCSI is not the subject of this white paper, it is important to note that the implementation of DCB in products such as the QFX3500 and the Juniper Networks QFabric family of products, along with the latest generation of CNAs and storage, provides many benefits to iSCSI for those deployments where the FC-BB-5 standards prove too limiting. This is of particular interest given that most CNAs and many storage subsystems can now be deployed, through different licensing, as either FCoE or iSCSI, giving the end user significant protection against the protocol debate.

Conclusion

Juniper Networks QFX3500 Switch is the first top-of-rack switch built to solve all of the challenges posed by access-layer convergence. It is the first fully FC-BB-5-enabled gateway capable of easily supporting upstream switches, including third-party embedded blade shelf switches. It works for both rack-mount and blade servers, and for organizations with combined or separate LAN and storage area network (SAN) teams. It is also the first product to leverage a new generation of powerful ASICs.

Industry firsts in many ways, Juniper Networks EX4500 Ethernet Switch and QFX3500 Switch both support an FCoE transit switch mode, and the QFX3500 also supports FCoE-FC gateway mode. They are fully standards-based, with rich implementations from both a DCB and an FC-BB-5 perspective, and feature-rich L3 capabilities. The QFX3500 Switch is a purpose-built FCoE-FC gateway which includes fungible combined Ethernet/FC ports, a single PFE design, and low latency cut-through switching.

There are a number of very practical server I/O and access-layer convergence topologies that can be used as a step along the path to full network convergence. During 2011 and 2012, further developments such as LAN on motherboard (LoM), QSFP, 40GbE, and the FCoE Direct Discovery Direct Attach (VN2VN) model will further bring Ethernet economics to FCoE convergence efforts.

About Juniper Networks

Juniper Networks is in the business of network innovation. From devices to data centers, from consumers to cloud providers, Juniper Networks delivers the software, silicon and systems that transform the experience and economics of networking. The company serves customers and partners worldwide. Additional information can be found at www.juniper.net.
Corporate and Sales Headquarters: Juniper Networks, Inc., 1194 North Mathilda Avenue, Sunnyvale, CA 94089 USA. Phone: 888.JUNIPER (888.586.4737) or 408.745.2000. Fax: 408.745.2100. www.juniper.net

APAC Headquarters: Juniper Networks (Hong Kong), 26/F, Cityplaza One, 1111 King's Road, Taikoo Shing, Hong Kong. Phone: 852.2332.3636. Fax: 852.2574.7803

EMEA Headquarters: Juniper Networks Ireland, Airside Business Park, Swords, County Dublin, Ireland. Phone: 35.31.8903.600. EMEA Sales: 00800.4586.4737. Fax: 35.31.8903.601

To purchase Juniper Networks solutions, please contact your Juniper Networks representative at 1-866-298-6428 or an authorized reseller.

Copyright 2011 Juniper Networks, Inc. All rights reserved. Juniper Networks, the Juniper Networks logo, Junos, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. All other trademarks, service marks, registered marks, or registered service marks are the property of their respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.

2000422-001-EN July 2011