WHITE PAPER

Adapting Data Centers Using Fan-Out Technology

Agoura Road, Calabasas, CA
Rev. A, August 2013
Table of Contents

The Increase of Speed and Densities
Transceiver Technology
  CFP
  QSFP
  CXP
Hybrid Transceivers and Cabling
Cabling Details
Disrupting the Traditional Three-Tier Network
Moving to an HSE-Enabled, Two-Tier Network Fabric
  40G Leaf/Spine Fabric With 40GE QSFP Uplink
  10G Leaf/Spine Fabric
Fan-Out Technology
The Ixia Solution
Conclusion
Internet traffic has grown exponentially as greater numbers of users employ more and more complex types of traffic. The need for bandwidth to handle cloud applications, mixed-media downloads and uploads, video streaming, peer-to-peer sharing, and growing wireless usage is obvious. Aggregating 10Gbps links to handle increased bandwidth not only complicates data center topologies, but is also costly and labor-intensive. While 40GE and 100GE connections have eased the problem somewhat, the need for greater densities of higher-speed links continues unabated. Both network manufacturers and service providers have a growing need for larger racks of Higher Speed Ethernet ports. At the same time that more ports are needed, IT CAPEX and OPEX budgets are being squeezed in order to maximize investments, so per-port prices need to be kept as low as possible. An obvious solution is fan-out cabling technology. This white paper discusses the need for, development of, and application of fan-out technology; how it impacts data centers and data center devices; and what Ixia is doing to assist with these issues.

The Increase of Speed and Densities

In 2007 the IEEE Higher Speed Study Group (HSSG) observed that core networking and computing applications were growing at different rates. On average, core networking bandwidth was doubling every eighteen months, while the bandwidth associated with computing applications was doubling every twenty-four months. In response, the committee specified two rates in IEEE Std 802.3ba-2010: 40 Gigabit Ethernet and 100 Gigabit Ethernet (GE). These rates are now commonly referred to as Higher Speed Ethernet (HSE). This projected growth predicted that 400Gbps would be needed by 2013, and one Terabit per second (Tbps) by 2015, for core networking.
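The projection above is simple compound doubling. As a quick sketch (the 100Gbps-in-2010 starting point is an illustrative assumption, not a figure from the HSSG report), an 18-month doubling period reproduces the 400Gbps-by-2013 and roughly 1Tbps-by-2015 figures:

```python
# Sketch of the HSSG growth projection: bandwidth need doubling every
# `doubling_months` months. The starting value and year are assumptions
# chosen to illustrate the arithmetic.

def projected_bandwidth(base_gbps, base_year, year, doubling_months):
    """Bandwidth after compound doubling at the given period."""
    months_elapsed = (year - base_year) * 12
    return base_gbps * 2 ** (months_elapsed / doubling_months)

# Core networking: assume 100Gbps needed in 2010, doubling every 18 months.
print(round(projected_bandwidth(100, 2010, 2013, 18)))  # 400 Gbps by 2013
print(round(projected_bandwidth(100, 2010, 2015, 18)))  # ~1008 Gbps (~1Tbps) by 2015
```

The slower 24-month doubling cited for computing applications yields only a 4x increase over the same 2010-2014 window, which is why the committee chose two rates (40GE for computing, 100GE for core networking).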
Based on this, the HSSG had to start work on the next level of Ethernet speed once 100 Gigabit Ethernet was completed. John D'Ambrosia, chair of the IEEE Ethernet Bandwidth Assessment Ad Hoc and chief Ethernet evangelist, recently summarized where the current pressure for more network bandwidth is actually coming from: network capacities are constantly pushed upward by applications used in corporate data centers, Internet exchanges, telecommunications carriers, finance, R&D, and content providers. Companies like Google and Facebook are expressing the need for 1Tbps Ethernet. The IEEE has started work on 400Gbps Ethernet with the creation of a new study group, which had its first meeting the week of May 13th in Victoria, British Columbia, Canada. Ixia attended this meeting and is on the forefront of its development, similar to Ixia's participation in the development of the 40 and 100Gbps IEEE standard. While 400Gbps and 1Tbps are likely to be deployed in the coming years, today vendors are addressing current needs with higher-density 40GE and 100GE products for the data center and core networks. In the short term, these companies can use Link Aggregation Control Protocol (LACP) to address higher bandwidth needs. On July 19th, 2012 the IEEE Working Group released a bandwidth assessment report providing strategic insight into demand drivers across many Ethernet wireline and wireless applications. One of the clearest examples is a study of the global IP network and the following factors contributing to the exploding demand for bandwidth:
Increased users: the number of Internet users is forecast to increase from 1.9 billion users in 2010 to 3 billion users in 2015

Increased rates and methods: a 4x increase in fixed broadband speeds is forecast for 2015, as the average fixed broadband speed of 7Mbps in 2010 is expected to rise to 28Mbps in 2015

Increased services: forecast growth (measured in Petabytes/month): mobile data 92%, fixed/wired 24%, fixed/wi-fi 39%

To meet the higher port density and test scale requirements of network equipment manufacturers and telecommunications carriers for both 40GE and 100GE interfaces, Ixia has been on the forefront of the HSE evolution with the first-to-market 40/100Gbps test product, Ixia's K2 family of load modules. A few years later Ixia more than doubled the density with the Xcellon-Flex series, providing 4x40GE QSFP+ interfaces in a single-slot load module. For even more density, a 10/40GE combination load module with 16x10GE SFP+ and 4x40GE QSFP interfaces was added to the Xcellon-Flex series. The Xcellon-Lava is the industry's current highest-density, highest-performance 40Gbps and 100Gbps test product for both data plane traffic generation and Layer 2-3 routing protocol emulation performance and scale.

Transceiver Technology

The IEEE Std 802.3ba-2010 (40/100 Gigabit Ethernet) standard encompasses a number of different Ethernet physical layer (PHY) specifications. A networking device or line card may support different PHY types by means of pluggable modules called transceivers. Without a practical understanding of the different Higher Speed Ethernet transceiver technologies, it is unlikely you will be able to bring up an HSE connection, since part of the specification defines both a new Physical Coding Sublayer (PCS) and the Physical Medium Dependent (PMD) sublayer; a PMD corresponds to a transceiver. The choice of transceiver and required cabling is a significant decision that will impact port density, cost, power requirements, and operating reach.
The choice of transceiver and required cabling will also impact reach and many other factors. The physical layer specification name encodes:

Prefix (speed): 40G = 40Gbps; 100G = 100Gbps
Suffix (medium): K = backplane; C = copper cable assembly; S = short reach (100m); L = long reach (10km); E = extended long reach (40km)
Suffix (coding scheme): R = 64B/66B block coding
Suffix (lanes): n = number of lanes or wavelengths (4 or 10, for copper or optical); when no number is present, a single serial lane is implied
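As an illustration of this naming scheme, a name such as 100GBASE-LR4 can be decoded mechanically. The following is a minimal sketch (the regular expression and field labels are my own, covering only the 802.3ba-era names described above):

```python
import re

# Decode an IEEE 802.3ba PHY name of the form <speed>GBASE-<medium>R<lanes>.
PHY_NAME = re.compile(r"^(40|100)GBASE-([KCSLE])R(\d*)$")

MEDIUM = {
    "K": "backplane",
    "C": "copper cable assembly",
    "S": "short reach (100m)",
    "L": "long reach (10km)",
    "E": "extended long reach (40km)",
}

def decode_phy(name):
    m = PHY_NAME.match(name)
    if not m:
        raise ValueError("not an 802.3ba PHY name: " + name)
    speed, medium, lanes = m.groups()
    return {
        "speed_gbps": int(speed),
        "medium": MEDIUM[medium],
        "coding": "64B/66B block coding",     # the 'R' in the suffix
        "lanes": int(lanes) if lanes else 1,  # no digit implies serial
    }

print(decode_phy("100GBASE-LR4"))
```

For example, 40GBASE-CR4 decodes to a 40Gbps copper cable assembly with 4 lanes, and 100GBASE-SR10 to a 100Gbps short-reach optical interface with 10 wavelengths.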
CFP

Optical modules are not standardized by any official body, but by Multi-Source Agreements (MSAs). One agreement that supports 40 and 100GE is the C Form-factor Pluggable (CFP) MSA, which was adopted for distances of 100+ meters. The C stands for centum, the Latin word for 100, since the standard was primarily developed for 100GE. Other module types, like QSFP and CXP, support shorter distances and are discussed later. The CFP MSA was formally launched at OFC/NFOEC 2009 in March by founding members Finisar, Opnext, and Sumitomo/ExceLight. The CFP form factor, as detailed in the MSA, supports both single-mode and multi-mode fiber and a variety of data rates, protocols, and link lengths, including all the PMD interfaces encompassed in the IEEE 802.3ba standard. At 40GE, target optical interfaces include 40GBASE-SR4 for 100 meters (m) and 40GBASE-LR4 for 10 kilometers (km). There are three PMDs for 100GE: 100GBASE-SR10 for 100 m, 100GBASE-LR4 for 10 km, and 100GBASE-ER4 for 40 km. CFP was designed after the Small Form-factor Pluggable (SFP) transceiver interface, but with a significantly larger form factor to support 100Gbps power and cooling requirements. The electrical connection of a CFP uses 10 x 10Gbps lanes in each direction (RX, TX). The optical connection can support both 10 x 10Gbps and 4 x 25Gbps variants. CFP transceivers can support a single 100Gbps signal like 100GE or OTU4, or one or more 40Gbps signals like 40GE, OTU3, or STM-256/OC-768. The CFP-MSA Committee has defined three form factors:

CFP: currently at standard revision 1.4 and widely available in the market
CFP2: currently at draft revision 0.3; half the size of the CFP transceiver; recently available in the market
CFP4: standard not yet available; half the size of a CFP2 transceiver; CFP4 PMDs are not available yet
CFP, CFP2, and CFP4 Transceivers

QSFP

The Quad (4-channel) Small Form-factor Pluggable (often abbreviated as QSFP or QSFP+) is a compact, hot-pluggable transceiver used for data communications applications. It interfaces a network device (switch, router, media converter, or similar device) to a fiber-optic or copper cable. It is an industry-standard format defined by the Small Form Factor Committee (SFF-8436, Rev 3.4, Nov. 12, 2009, Specification for QSFP+ Copper and
Optical Modules) and supported by many network component vendors, supporting data rates from 4x10Gbps. The format specification is evolving to enable higher data rates; as of May 2013, the highest supported rate is 4x28Gbps (112Gbps), defined in the SFF-8665 document (commonly known as QSFP28), which will support 100GE. This is not widely available yet. The QSFP specification supports Ethernet, Fibre Channel, InfiniBand, and SONET/SDH standards with different data rate options. QSFP+ transceivers are designed to support Serial Attached SCSI, 40G Ethernet, 20G/40G InfiniBand, and other communications standards, as well as copper cable media. QSFP modules increase port density by 3x-4x compared to CFP modules. Common applications include 40GE over copper QSFP+ (included in the 802.3ba standard, 7m/23ft reach) and 40GE over multimode fiber (OM3: 100 meters, OM4: 150 meters).

QSFP+ Pluggable Module

QSFP+ Direct Attached Cable (DAC) using Copper and Active Optical Cable (AOC) using Fiber

CXP

CXP is targeted at the clustering and high-speed computing markets. It is about one quarter the size of a CFP transceiver, providing higher-density network interfaces. CXP is a copper connector system specified by the InfiniBand Trade Association. It provides twelve 10Gbps links suitable for one 100 Gigabit Ethernet channel, three 40 Gigabit Ethernet channels, twelve 10 Gigabit Ethernet channels, or a single InfiniBand 12x QDR link.
CXP Active Copper, Optical (pluggable), Active Optical

Typical applications of CXP in the data center include 100GE over copper (7m/23ft) and 100GE over multimode fiber; CXP serves short-reach applications, while CFP is used for longer reach.

Hybrid Transceivers and Cabling

It is often the case that two devices need to be connected which do not have the same physical interface, so there are hybrid transceivers and cable assemblies which can convert one form factor to another. Some examples are:

CFP-to-4x10GE SFP+
CFP-to-QSFP+: one or two ports of 40GE QSFP+
CFP-to-CXP: one port of 100GE CXP
CXP-to-QSFP+: CXP (100GE) to 3x40GE QSFP+ fan-out cable
QSFP (40GE) to 4x10GE SFP+ fan-out cable

Cabling Details

In addition to the transceiver technology being used, it is also critical to know the physical media connector (cable) and interface type. Options include:

Pluggable Copper: RJ45; Direct Attached Cable (DAC), with cable and transceiver assembly. The term DAC is used for copper cables, which can be of the passive or active type.

Pluggable Fiber: Single Mode Fiber (SMF); Multimode Fiber (MMF); Active Optical Cable (AOC), with fiber cable and transceiver assembly. The term AOC is used for fiber cables of the active type.

Active Optical Cable (AOC) is a cabling technology that accepts the same electrical inputs as a traditional copper cable but converts the electrical signal to an optical signal (an electrical-to-optical, or E/O, conversion) in the transceiver assembly, using optical fiber between the connectors (i.e., transceivers). AOC uses an E/O and an O/E (the converse of E/O) on the cable ends to improve the speed and distance performance of the cable without sacrificing compatibility with standard electrical interfaces.
Optical fibers are color-coded according to fiber type. Color coding enables technicians to quickly determine whether a particular cable is multimode (e.g., orange or aqua) or single mode (e.g., yellow or blue). The jacket imprint provides additional information, such as fiber size and fire code rating. Be aware that the color of some jacketed fiber varies from this standard. Copper cabling is usually less costly than optical; however, optical cable advantages include:

100m-plus reach
No electromagnetic interference (EMI)
One third the size and one third the weight of copper cable
A high level of cable flexibility for routing within infrastructures

Pluggable fiber optical cable connector types:

LC (round connector pair), standard IEC
SC (square connector pair), standard IEC
MPO/MTP (multi-fiber ribbon), standard IEC
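The jacket color convention above can be captured in a tiny lookup. This is a sketch covering only the colors mentioned in the text; as noted, real jackets sometimes deviate, so the fallback defers to the jacket imprint:

```python
# Jacket-color to fiber-type lookup, per the convention described above.
# Covers only the example colors from the text; unknown colors fall back
# to checking the jacket imprint.
JACKET_COLORS = {
    "orange": "multimode",
    "aqua":   "multimode",
    "yellow": "single mode",
    "blue":   "single mode",
}

def fiber_type(color):
    return JACKET_COLORS.get(color.lower(), "unknown (check the jacket imprint)")

print(fiber_type("Aqua"))    # multimode
print(fiber_type("violet"))  # unknown (check the jacket imprint)
```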
Example transceiver/media table for various distances:

40 Gigabit Ethernet
  PMD Name     Reach                      Signaling   Media          Module
  40GBASE-KR4  1m backplane               4x10Gbps    Backplane      (none)
  40GBASE-CR4  7m copper cable assembly   4x10Gbps    Twinax         QSFP
  40GBASE-SR4  100m OM3 / 150m OM4 MMF    4x10Gbps    Parallel MMF   QSFP
  40GBASE-LR4  10km SMF                   4x10Gbps    Duplex SMF     CFP/QSFP

100 Gigabit Ethernet
  PMD Name       Reach                      Signaling   Media          Module
  100GBASE-CR10  7m copper cable assembly   10x10Gbps   Twinax         CXP
  100GBASE-SR10  100m OM3 / 150m OM4 MMF    10x10Gbps   Parallel MMF   CFP or CXP
  100GBASE-LR4   10km SMF                   4x25Gbps    Duplex SMF     CFP
  100GBASE-ER4   40km SMF                   4x25Gbps    Duplex SMF     CFP

Disrupting the Traditional Three-Tier Network

Fan-out technology for switches and servers has been widely adopted by network equipment manufacturers over the last few years. Fan-out technology is enabled on different physical port types and with different cable media types, allowing many combinations of fanned-out configurations. The benefits of fan-out technology, and the forces driving changes in traditional network designs, are not readily apparent. In this paper, we will delve into why traditional network architectures for data centers must change, and how fan-out technology plays into newer data center designs. For a number of years the hierarchical, three-tier network design for data centers has prevailed across the entire networking industry. Cisco Systems, Inc. essentially pioneered the three-tier network structure shown in the illustration below: Core, Distribution, and Access.
This model has broad worldwide implementation. The model relies upon each area (Core, Distribution, and Access) having network equipment targeted for its specific tasks. Each of these areas performs a targeted set of functions related to its area of responsibility and its interaction with the other layers. The scalability of the three-tier design is vertical, meaning that clients and servers talk to each other from the Access layer through the Distribution layer to the Core, and conversely. This is called north-south traffic. In a data center network, the three-tier design relies upon several aspects of the network equipment and its layer 2 and layer 3 protocols to manage traffic and segregate clients: VLANs, static connections, IP routing, and more. To scale this design for more bandwidth and throughput often means adding more line cards (blades) to the equipment in the Distribution layer, and sometimes to the Core. At the Distribution layer, some ports can be exchanged for higher-speed ports, but only if they connect to an equal number of higher-speed interfaces at the Core. Higher-speed interfaces such as 40GE or even 100GE will then connect to matching Core interfaces. These handle the higher bandwidth demands in the Distribution layer created by the increased number of clients in the Access layer and servers in the Distribution layer. This is generally an expensive way to scale. Distribution-to-Core bandwidth may now be available, but it is not always used at full capacity, and adding capacity that may sit idle (under-used) is expensive.
For application and user scalability, more servers must be added to the Distribution layer to handle the increase in application traffic from clients (via Access switches) and from other application servers in the Distribution layer. Servers in the three-tier design are usually segregated into specific areas such as web servers and authentication servers, along with other devices such as load balancers and firewalls. The Distribution layer manages the traffic to and from these servers, and the traffic to and from the Access and Core layers. This is a heavy load. Each piece of Distribution layer network equipment is also connected to some number of Access switches. Adding more network bandwidth (higher-speed interfaces) and more servers allows the Distribution layer to accommodate more clients and applications. However, there is a penalty. The Distribution layer is saddled with significantly large loads that tax its computing and memory resources: it must manage the layer-3 protocols as well as the growing number of servers and the traffic they generate. Consider that in today's world most servers have multicore processors, able to pump out more application-based traffic over faster interfaces like 10GE and 40GE. This further increases the burden on the Distribution layer. A north-south traffic design will scale until the network equipment saturates the physical ports available on the Distribution equipment, and/or until the computing and memory resources of the Distribution equipment are consumed by layer 3 protocols and application-based server traffic. What happens if a data center network has to go from hundreds of servers to several thousand? Can the Distribution layer in this design scale at a reasonable overall cost? Most likely not. The Distribution layer is over-burdened in this design. It can be a bottleneck in today's application environment.
Enter the cloud, cloud computing, and virtual machines: what do these requirements do to the three-tier network approach? Cloud computing and the many new virtual machine applications that run over the cloud span hundreds, thousands, or many thousands of servers, and they create what is known as east-west traffic: traffic from server to server. In the three-tier design, east-west traffic will use up ALL the resources of the Distribution layer. Hence, a different network design approach is required.
Moving to an HSE-Enabled, Two-Tier Network Fabric

Because east-west, server-to-server traffic exhausts the Distribution layer in a three-tier design, the answer is the two-tier leaf and spine network design, shown below.

Two-Tier Leaf-Spine Network Fabric Architecture

The idea behind a leaf and spine network is to make scalability, flexibility, high throughput, and low latency the attributes used for scaling up the data center, employing simpler network switches than were used in the traditional Distribution layer. The leaf and spine approach scales the data center to thousands of servers by re-constructing the Distribution layer and spreading out (or flattening) the load. In a leaf and spine network, leaf switches connect to servers and client switches, and every leaf switch is connected, via its uplink ports, to every spine switch. On the leaf switch, all of the uplink ports connect to the spine switches, while the remaining leaf switch ports connect to client switches and servers. Spine switch ports are used to connect to the core edge (CE) device and to interconnect all the leaf switches (there are several variations for connecting to edge devices that won't be covered here). The fundamental leaf and spine network design is illustrated below: a core or core edge device above L3-aware spine switches, with L2 leaf (top-of-rack, TOR) switches connecting client switches, servers, and server racks.
Every leaf is connected to every spine. ECMP (equal-cost multipath) routing spreads traffic across the full mesh between leaf and spine. The spine switches interconnect all the leaf switches, and the spine may use OSPF or BGP layer-3 protocols to maintain routes to the core edge. Servers and client switches are no more than one hop from the next server in a massive layer 2 switch topology. This allows the leaf and spine, full-mesh Ethernet layer 2 network to forward east-west traffic with very low latency across the mesh, and to and from the spine with minimal protocol overhead (i.e., less load on switch computing resources). This network implementation allows servers to be moved and easily re-mapped back onto the layer-2 Ethernet network, providing improved flexibility. The scalability of leaf and spine networks is incredibly large. A great example is provided by author Brad Hedlund, who has many articles and videos published on the web regarding network design. We will paraphrase Brad's examples of scalability using fan-out technology at the leaf switch level, and what physical port fan-out does to increase scalability in the leaf and spine network design approach. Let's build a network that supports 1200 10GE servers in one fabric with 2.5:1 oversubscription. In the future, the network must seamlessly expand to over 5000 10GE servers without increasing latency or oversubscription. We will compare two different approaches. In the first example below, the spine switches each have 32 ports of 40GE. Each leaf TOR switch is attached to the spine with 4 ports of 40GE, using the leaf switch's 40GE QSFP uplink ports. There will be 40 servers per rack connected to each leaf switch. Each leaf switch will connect to each spine switch. In the second example below, we will take the same leaf and spine switch setup and change the uplink configuration by fanning out the leaf switches' 40GE QSFP ports to 4x10GE per QSFP port.
This provides 16 x 10GE uplink ports to connect to spine switches. We will compare the 40GE direct uplink and the fan-out 4x10GE uplink configurations and their inherent difference in maximum server scalability.

40G Leaf/Spine Fabric With 40GE QSFP Uplink

The number of connections used for uplinks from each leaf switch determines the total number of spine switches in the design. And the number of ports on each spine switch determines the total number of leaf switches.

40G Leaf/Spine: 4 spines, 32 leaves, 40 x 10GE servers per leaf, 1280 x 10GE total

This 40GE fabric allows a maximum of 1280 10GE servers at 2.5:1 oversubscription. This meets the initial scale target of 1200 servers; however, it cannot scale larger. Before the
network can achieve the 5000-server design goal, the 40GE design will have to be re-architected. Let's see what level of server scalability can be achieved when we fan out the leaf switch uplink ports to 4x10GE each.

10G Leaf/Spine Fabric

Each QSFP leaf port is now fanned out to 4 x 10GE interfaces, for a total of 16 10GE uplinks per leaf. A QSFP-to-SFP+ optical fan-out cable is used. How many servers can this design scale up to?

10G Leaf/Spine: 16 spines, 128 leaves, 40 x 10GE servers per leaf, 5120 x 10GE total

Each spine switch now has 128 ports of 10GE. Each leaf switch is attached to the spine with 16 ports of 10GE. In this 10GE fabric the maximum scale is 5120 10GE servers at 2.5:1 oversubscription. Initially, this fabric can be built to 1200 servers and seamlessly scale in the future to over 5000 servers, all with the same bandwidth, latency, and oversubscription. The 10GE leaf and spine fabric design provides four times greater scalability compared to the 40GE fabric design. All of the hardware is the same between the two designs, and all of the bandwidth, latency, and oversubscription are the same as well. Two simple principles of scaling a leaf and spine fabric are demonstrated here: port count, port count, and more port count. The uplink port count on the leaf switch determines the maximum number of spine switches, and the spine switch port count determines the maximum number of leaf switches. The two scenarios above are perfect examples of how fan-out technology may be used to scale up a data center fabric, and how the total bandwidth and use of high-density switches make massively scalable fabrics affordable and simpler compared to the traditional three-tier network design. Fan-out technology helps enable these new levels of scalability.
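The scaling arithmetic of the two designs above can be sketched as a small calculator. It assumes, as in Hedlund's examples, that each leaf sends one uplink to each spine (so the spine count equals the uplinks per leaf), that each spine port feeds one leaf (so the leaf count equals the spine port count), and that servers attach at 10GE:

```python
# Leaf/spine scale and oversubscription, under the assumptions stated above.

def fabric_scale(spine_ports, uplinks_per_leaf, uplink_gbps, servers_per_leaf):
    num_leaves = spine_ports                 # one spine port per leaf
    servers = num_leaves * servers_per_leaf
    downlink_gbps = servers_per_leaf * 10    # 10GE-attached servers
    oversub = downlink_gbps / (uplinks_per_leaf * uplink_gbps)
    return servers, oversub

# Design 1: 32-port 40GE spines, 4x40GE uplinks per leaf, 40 servers per rack.
print(fabric_scale(32, 4, 40, 40))    # (1280, 2.5)

# Design 2: each QSFP fanned out to 4x10GE -> 16x10GE uplinks, 128-port spines.
print(fabric_scale(128, 16, 10, 40))  # (5120, 2.5)
```

Note that total uplink bandwidth per leaf is 160Gbps in both designs; only splitting it across more, slower uplinks (and hence more spines) quadruples the leaf count and the server count.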
Fan-Out Technology

One of the latest enabling technologies helping to increase port densities and lower costs is the fan-out cable. At first look, a fan-out cable seems very simple: it takes one large-bandwidth physical interface and breaks it out into several smaller-bandwidth interfaces. An example is taking a 100GE CXP interface and using a fan-out cable (and transceivers) to break it into two or three 40GE QSFP+ interfaces.
However, complex lower-layer technologies are critical to making this magic happen. To understand how and why fan-out technology works, we need a quick refresher on the Higher Speed Ethernet (HSE) standard, IEEE Std 802.3ba-2010, which defines Ethernet at 40Gbps and 100Gbps. The HSE standard defined a new layered model, shown below. Without getting into too much detail, the standard specified a coding scheme using 64B/66B block encoding and defined a number of physical lanes (or wavelengths): 4 or 10 for copper, and 4 or 10 for optical. The magic really happens in the physical coding sublayer (PCS) with multilane distribution (MLD). The PCS is designed to support current PHY types as well as future PHY types that may be developed, fueled by continuous advances in electrical and optical transmission. The PCS layer also performs the following functions:

Frame delineation
Transportation of control signals
Ensuring the clock transition density needed by the physical optical and electrical technology
Striping and re-assembling the information across multiple lanes
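The striping and re-assembly function can be illustrated with a toy model. This is a sketch only: 66-bit blocks are modeled as integers, and scrambling, alignment markers, and deskew are omitted:

```python
# Toy model of PCS multilane distribution (MLD): 64B/66B blocks are dealt
# round-robin across the PCS lanes on transmit, and interleaved back into
# one stream on receive.

def stripe(blocks, num_lanes):
    """Distribute blocks round-robin across PCS lanes."""
    lanes = [[] for _ in range(num_lanes)]
    for i, block in enumerate(blocks):
        lanes[i % num_lanes].append(block)
    return lanes

def reassemble(lanes):
    """Inverse of stripe(): interleave the lanes back into one stream."""
    out = []
    depth = max(len(lane) for lane in lanes)
    for i in range(depth):
        for lane in lanes:
            if i < len(lane):
                out.append(lane[i])
    return out

blocks = list(range(8))             # eight 66-bit blocks, modeled as ints
lanes = stripe(blocks, 4)           # 40GE-style: 4 PCS lanes
print(lanes)                        # [[0, 4], [1, 5], [2, 6], [3, 7]]
print(reassemble(lanes) == blocks)  # True
```

Because each lane carries a simple round-robin share of the block stream, the per-lane data can be bit-muxed onto whatever physical lane widths the module provides, which is exactly what makes physical fan-out possible.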
PCS lane distribution example

40Gbps and 100Gbps use parallel electrical lanes with multiple-fiber optical media. The 40Gbps PCS runs over 4 physical lanes of 10Gbps (in each direction), combining for 40Gbps. The 100Gbps PCS uses 10 physical lanes of 10Gbps each (in each direction), combining for 100Gbps. The parallel, higher-speed-per-lane scheme used in HSE transmitters and receivers required a new, more flexible PCS mechanism: Multi-Lane Distribution (MLD). Two key advantages of the MLD methodology are that all the encoding, scrambling, and deskew functions can be implemented in a CMOS device, which is expected to reside on the host device, and that minimal processing of the data bits (other than bit-muxing) happens in the high-speed electronics embedded with an optical module. This simplifies the functionality and ultimately lowers the cost of these high-speed optical interfaces. Leveraging the PCS and MLD, fan-out (physical layer) technology is a physical split of the 10GE lanes. The MSA defines how the lanes should be grouped, and vendors implementing fan-out support must support changing the mode of the port to properly define that grouping. For example, a 100GE CXP adaptor has 12 10G lanes (since it was developed to support both Ethernet and InfiniBand). A fan-out cable enables splitting out the 12 lanes into groups usable for 10GE or 40GE.

CXP fan-out table
  Speed   Fan-out/lanes used
  100GE   Uses lanes 1-10 (the two lanes 0 and 11 are not used)
  40GE    Can use lanes 0-3, 4-7, or 8-11 for up to 3x40GE fan-out
  10GE    Can use any of the lanes for up to 12x10GE fan-out
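The CXP lane-grouping rules above can be expressed directly as data. This is a sketch of the table, with lanes numbered 0-11 as in the text; the helper names are my own:

```python
# CXP fan-out modes: which of the 12 10G lanes each mode uses, per the
# grouping rules described above.
CXP_FANOUT = {
    "100GE": [list(range(1, 11))],                        # lanes 1-10; 0 and 11 unused
    "40GE":  [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]],
    "10GE":  [[n] for n in range(12)],                    # any lane, up to 12x10GE
}

def ports_in_mode(speed):
    """Number of ports a CXP presents in the given fan-out mode."""
    return len(CXP_FANOUT[speed])

def lanes_unused(speed):
    """Lanes left idle in the given mode."""
    used = {lane for group in CXP_FANOUT[speed] for lane in group}
    return sorted(set(range(12)) - used)

print(ports_in_mode("40GE"))   # 3
print(lanes_unused("100GE"))   # [0, 11]
```

A vendor's port-mode setting effectively selects one of these groupings, which is why fan-out requires explicit configuration support and not just a special cable.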
Examples of CXP 100GE-to-3x40GE QSFP fan-out cables

QSFP+ uses 4 lanes for 40GE. A fan-out cable enables splitting out the 4 lanes into Tx/Rx pairs usable for 10GE transmission.

QSFP fan-out table
  Speed   Fan-out/lanes used
  40GE    Uses all four lanes
  10GE    Can use any of the lanes for up to 4x10GE fan-out

Certain technologies do not use a 10GE lane. An example is QSFP28 when set to the 100GE speed: QSFP28 uses a 4x28Gbps mechanism, which prevents it from fanning out to lower standard rates. This can be considered a disadvantage of this interface, since it won't likely support fan-out technology. Another consideration for fan-out technology is that behind each group of fanned-out ports there must also be a MAC engine and the entire IEEE reconciliation layer. For example, a 40GE QSFP port fanned out to 4x10GE links must have four individual 10GE MACs, one running behind each group of electrical lanes. The port electronics must be able to switch from a single 40GE MAC to four 10GE MACs, and back again to a single 40GE MAC. This process assures that the lanes have a correct and trackable MAC assignment; it is crucial to making fan-out technology actually work. The benefits of leveraging fan-out technology are quite compelling. It enables equipment manufacturers to produce networking hardware capable of supporting multiple rates, and it can expand port densities on a given line card. This usually results in lower cost per port, less heat generation, and less power consumption, with the savings passed on to the end consumer.

The Ixia Solution

Ixia has developed the Xcellon-Multis load module to accommodate the growing need for more HSE testing ports at lower cost.
With the new fan-out technology inherent in this latest member of the Xcellon family of load modules, Ixia has moved to anticipate our customers' needs. Using cable fan-out technology, we deliver the HSE Ethernet testing needed, allowing higher-speed ports to fan out to several ports (links) of a lower speed:
Multiple speeds from a single port: 100GE fan-out technology separates a single physical interface into multiple physical interfaces that are 40GE- or 10GE-capable, providing high-density interfaces over multiple speeds. This increases interface flexibility and facilitates a wide range of interoperability testing.

Multi-personality interfaces from the same port: fan-out technology means having 100/40/10GE speed support in an all-in-one high-density load module, with support for multiple interface types: CXP, QSFP, and MT fiber cable interfaces. Soon, support for 10GE SFP+ LC connector interfaces will be available, facilitating multi-speed tests on a single card.

A common feature set across multiple speeds: the same features for 100GE and 40GE, all usable from a single port or a group of ports.

Higher port densities per chassis: 2x the 100GE capacity and 3x the 40GE capacity.

The new Xcellon-Multis module with fan-out technology allows you to do more with less, and to spend less for more impact:

Improved ROI: a simple fan-out cable gives the customer 3 ports of 40GE QSFP at a lower cost than a new, full load module. This makes for a more versatile chassis and test bed, with multiple interfaces in a single-slot card. The industry-standard fan-out technology allows you to test confidently.

Fewer slots used: with fan-out technology choosing between 100 and 40GE speeds, you can reduce the HSE interface footprint in your chassis while increasing your testing options.

Less power used: fan-out technology allows the user to have 100GE/40GE test ports all emanating from a single card.

Lower cooling demands in the lab: every BTU of cooling saved translates into reduced energy costs, and fewer load modules mean less heat.
Conclusion

Driven by the enormous growth in Internet users and access devices, the total bandwidth requirement of a single switch or router today runs to dozens of terabits. Service providers and enterprises need devices that scale to match customer demand. Network equipment manufacturers need devices that scale up to hundreds of 40GE and 100GE ports, instead of dozens. To meet the higher port density and test scale requirements of network equipment manufacturers and telecommunications carriers for both 40GE and 100GE interfaces, Ixia is implementing new fan-out cabling technology. This technology addresses both the need for high HSE port densities and the need to keep costs at a manageable level.