40 Gigabit Ethernet and 100 Gigabit Ethernet Technology Overview
June 2010
Authors: John D'Ambrosia, Force10 Networks; David Law, 3COM; Mark Nowell, Cisco Systems
1. This work represents the opinions of the authors and does not necessarily represent the views of their affiliated organizations or companies.
Ethernet Alliance, 3855 SW 153rd Drive, Beaverton, OR
Executive Summary

The IEEE Std 802.3ba 40 Gb/s and 100 Gb/s Ethernet amendment to the IEEE Std 802.3 Ethernet standard was approved on June 17, 2010 by the IEEE Standards Association Standards Board. The IEEE P802.3ba Task Force developed a single architecture capable of supporting both 40 Gigabit and 100 Gigabit Ethernet, while producing physical layer specifications for communication across backplanes, copper cabling, multimode fibre, and single mode fibre. This white paper provides an overview of 40 and 100 Gigabit Ethernet and the underlying technologies.

Introduction

For more than 30 years, Ethernet has evolved to meet the growing demands of packet switched networks. It has become the unifying technology enabling communications via the Internet and other networks using the Internet Protocol (IP). Due to its proven low cost, known reliability, and simplicity, the majority of today's internet traffic starts or ends on an Ethernet connection. This popularity has resulted in a complex ecosystem between carrier networks, enterprise networks, and consumers, creating a symbiotic relationship between its various parts.

In 2006, the IEEE 802.3 working group formed the Higher Speed Study Group (HSSG) and found that the Ethernet ecosystem needed something faster than 10 Gigabit Ethernet. The growth in bandwidth for network aggregation applications was found to be outpacing the capabilities of networks employing link aggregation with 10 Gigabit Ethernet. As the HSSG studied the issue, it was determined that computing and network aggregation applications were growing at different rates. For the first time in the history of Ethernet, a Higher Speed Study Group determined that two new rates were needed: 40 gigabit per second for server and computing applications and 100 gigabit per second for network aggregation applications. The IEEE P802.3ba 40 Gb/s and 100 Gb/s Ethernet Task Force was formed in January 2008 to develop a 40 Gigabit Ethernet and 100 Gigabit Ethernet draft standard.
Encompassed in this effort was the development of physical layer specifications for communication across backplanes, copper cabling, multimode fibre, and single mode fibre. Continued efforts by the Task Force led to the approval of the IEEE Std 802.3ba 40 Gb/s and 100 Gb/s Ethernet amendment to the IEEE Std 802.3 Ethernet standard on June 17, 2010 by the IEEE Standards Board. This white paper provides an overview of the IEEE Std 802.3ba 40 Gb/s and 100 Gb/s Ethernet standard and the underlying technologies.
The 40 Gigabit and 100 Gigabit Ethernet Objectives

The objectives that drove the development of this standard are listed below, with a summary of the physical layer specifications provided in Table 1.

- Support full duplex operation only
- Preserve the 802.3/Ethernet frame format utilizing the 802.3 media access controller (MAC)
- Preserve minimum and maximum frame size of the current standard
- Support a bit error rate (BER) better than or equal to 10^-12 at the MAC/physical layer service interface
- Provide appropriate support for optical transport network (OTN)
- Support a MAC data rate of 40 gigabit per second
- Provide physical layer specifications which support 40 gigabit per second operation over:
  - at least 10km on single mode fibre (SMF)
  - at least 100m on OM3 multimode fibre (MMF)
  - at least 7m over a copper cable assembly
  - at least 1m over a backplane
- Support a MAC data rate of 100 gigabit per second
- Provide physical layer specifications which support 100 gigabit per second operation over:
  - at least 40km on SMF
  - at least 10km on SMF
  - at least 100m on OM3 MMF
  - at least 7m over a copper cable assembly

40 Gigabit Ethernet: at least 1m backplane, at least 7m copper cable, at least 100m OM3 MMF, at least 150m OM4 MMF, at least 10km SMF
100 Gigabit Ethernet: at least 7m copper cable, at least 100m OM3 MMF, at least 150m OM4 MMF, at least 10km SMF, at least 40km SMF

Table 1 Summary of Physical Layer Specifications for IEEE 802.3ba
The 40 Gigabit Ethernet and 100 Gigabit Ethernet Architecture

The IEEE Std 802.3ba-2010 amendment specifies a single architecture, shown in Figure 1, that accommodates 40 Gigabit Ethernet and 100 Gigabit Ethernet and all of the physical layer specifications. The MAC layer, which corresponds to Layer 2 of the OSI model, is connected to the media (optical or copper) by an Ethernet PHY device, which corresponds to Layer 1 of the OSI model. The PHY device consists of a physical medium dependent (PMD) sublayer, a physical medium attachment (PMA) sublayer, and a physical coding sublayer (PCS). The backplane and copper cabling PHYs also include an auto-negotiation (AN) sublayer and a forward error correction (FEC) sublayer.

Figure 1 IEEE 802.3ba Architecture
The Physical Coding Sublayer (PCS)

As shown in Figure 1, the PCS translates between the respective media independent interface (MII) for each rate and the PMA sublayer. The PCS is responsible for the encoding of data bits into code groups for transmission via the PMA and the subsequent decoding of these code groups from the PMA. The Task Force developed a low overhead multilane distribution scheme for the PCS for 40 Gigabit Ethernet and 100 Gigabit Ethernet. This scheme has been designed to support all PHY types for both 40 Gigabit Ethernet and 100 Gigabit Ethernet. It is flexible and scalable, and will support any future PHY types that may be developed, based on future advances in electrical and optical transmission.

The PCS also performs the following functions:

- Delineation of frames
- Transport of control signals
- Ensuring the clock transition density needed by the physical optical and electrical technology
- Striping and reassembling the information across multiple lanes

The PCS leverages the 64B/66B coding scheme that was used in 10 Gigabit Ethernet. It provides a number of useful properties, including low overhead and sufficient code space to support the necessary code words, consistent with 10 Gigabit Ethernet.

The multilane distribution scheme developed for the PCS is fundamentally based on a striping of the 66-bit blocks across multiple lanes. The mapping of the lanes to the physical electrical and optical channels used in any implementation is complicated by the fact that the two sets of interfaces are not necessarily coupled: technology development for a chip interface and for an optical interface are not always tied together. Therefore, it was necessary to develop an architecture that decouples the evolution of the optical interface widths from the evolution of the electrical interface widths.
The transmit PCS, therefore, performs the initial 64B/66B encoding and scrambling on the aggregate channel (40 or 100 gigabit per second) before distributing the 66-bit blocks on a round-robin basis across the multiple lanes, referred to as PCS lanes, as illustrated in Figure 2. The number of PCS lanes needed is the least common multiple of the expected widths of the optical and electrical interfaces. For 100 Gigabit Ethernet, 20 PCS lanes have been chosen. The number of electrical or optical interface widths supportable in this architecture is equal to the number of divisors of the total PCS lane count. Therefore, 20 PCS lanes support interface widths of 1, 2, 4, 5, 10 and 20 channels or wavelengths. For 40 Gigabit Ethernet, 4 PCS lanes support interface widths of 1, 2, and 4 channels or wavelengths.
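To make the lane-count arithmetic concrete, here is a small Python sketch (the function names are illustrative, not from the standard) that derives the supported interface widths as the divisors of the PCS lane count and distributes 66-bit blocks round-robin across the lanes:

```python
def supported_widths(num_pcs_lanes):
    """Interface widths this PCS lane count can serve: every divisor of
    the lane count is a valid number of channels or wavelengths."""
    return [w for w in range(1, num_pcs_lanes + 1) if num_pcs_lanes % w == 0]

def distribute_blocks(blocks, num_pcs_lanes):
    """Round-robin distribution of 66-bit blocks across PCS lanes."""
    lanes = [[] for _ in range(num_pcs_lanes)]
    for i, block in enumerate(blocks):
        lanes[i % num_pcs_lanes].append(block)
    return lanes

print(supported_widths(20))  # 100 GbE: [1, 2, 4, 5, 10, 20]
print(supported_widths(4))   # 40 GbE:  [1, 2, 4]
```

The divisor property is exactly why 20 was chosen for 100 Gigabit Ethernet: it is the least common multiple of the anticipated electrical width (10) and optical widths (4, 10).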
[Figure 2: PCS Multilane Distribution Concept. An aggregate stream of 66-bit words is distributed by simple round-robin across PCS lanes 1 through n, with a lane marker inserted periodically on each lane.]

Once the PCS lanes are created, they can be multiplexed into any of the supportable interface widths. Each PCS lane has a unique lane marker, which is inserted once every 16,384 blocks. All multiplexing is done at the bit level. The round-robin bit-level multiplexing can result in multiple PCS lanes being multiplexed into the same physical channel. The unique property of the PCS lanes is that, no matter how they are multiplexed together, all bits from the same PCS lane follow the same physical path, regardless of the width of the physical interface. This enables the receiver to correctly reassemble the aggregate channel by first demultiplexing the bits to reassemble each PCS lane and then re-aligning the PCS lanes to compensate for any skew. The unique lane markers also enable the deskew operation in the receiver. Bandwidth for these lane markers is created by periodically deleting inter-packet gaps (IPG). These alignment blocks are also shown in Figure 2. The receiver PCS realigns the multiple PCS lanes using the embedded lane markers and then reorders the lanes into their original order to reconstruct the aggregate signal.

Two key advantages of the PCS multilane distribution methodology are that all of the encoding, scrambling and deskew functions can be implemented in a CMOS device (which is expected to reside on the host device), and that minimal processing of the data bits (other than bit multiplexing) happens in the high speed electronics embedded with an optical module. This will simplify the functionality and ultimately lower the cost of these high speed optical interfaces.

The Physical Medium Attachment Sublayer (PMA)

The PMA sublayer enables the interconnection between the PCS and any type of PMD sublayer.
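The bit-level multiplexing property described above can be sketched in a few lines of Python (a toy model, not the standard's formal specification): because the mux is a strict round-robin, bits from one PCS lane keep their relative order on the physical channel, so the receiver can recover each lane by de-interleaving.

```python
def bit_mux(pcs_lanes):
    """Round-robin bit-level multiplex of several PCS lanes onto one
    physical channel: one bit from each lane per round."""
    channel = []
    for bits in zip(*pcs_lanes):
        channel.extend(bits)
    return channel

def bit_demux(channel, num_lanes):
    """Receiver side: de-interleave the channel back into per-lane streams."""
    return [channel[i::num_lanes] for i in range(num_lanes)]

lanes = [[1, 1, 0, 0], [0, 1, 0, 1]]   # two toy PCS lanes
channel = bit_mux(lanes)
assert bit_demux(channel, 2) == lanes  # per-lane bit order is preserved
```

In the real PCS the receiver does not know in advance which de-interleaved stream is which lane; that is what the periodic lane markers resolve.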
A PMA sublayer will also reside on either side of a retimed interface, referred to as XLAUI (40 gigabit per second attachment unit interface) for 40 Gigabit Ethernet or CAUI (100 gigabit per second attachment unit interface) for 100 Gigabit Ethernet.
Figure 3 illustrates the general architecture for 100 Gigabit Ethernet, as well as examples of two other architectural implementations:

- 100GBASE-LR4, which is defined as 4 wavelengths at 25 gigabit per second per wavelength on single mode fiber.
- 100GBASE-SR10, which is defined as 10 parallel lanes at 10 gigabit per second, each carried on its own fiber path, on multimode fiber.

These two implementations will be used to illustrate the flexibility needed to support the multiple PMDs being developed for 40 and 100 Gigabit Ethernet.

[Figure 3: Illustrations of 100GBASE-R Architectures. Three sublayer stacks (LLC, optional MAC Control, MAC, Reconciliation, PCS, PMA, PMD, medium): generic 100GBASE-R via the CGMII and MDI; 100GBASE-LR4 with PMA (20:10), CAUI, and PMA (10:4); and 100GBASE-SR10 with PMA (20:10) and PPI.]

As described in the previous section, 20 PCS lanes are used for 100 Gigabit Ethernet. In the example implementation shown in the middle diagram of Figure 3, the two PMA sublayers are interconnected by the CAUI electrical interface, which is based on a 10-lane-wide interface at 10 gigabit per second per lane. In this implementation the PMA sublayer at the top of the CAUI multiplexes the 20 PCS lanes into 10 physical lanes. The PMA sublayer at the bottom of the CAUI performs three functions: it retimes the incoming electrical signals; after the retiming, the electrical lanes are converted back to 20 PCS lanes; these are then multiplexed into the 4 lanes needed for the 100GBASE-LR4 PMD.

The implementation of the 100GBASE-SR10 architecture, however, is different. In this implementation a host chip is directly connected to an optical transceiver that is attached to 10 parallel fiber paths in each direction.
The PMA sublayer resides in the same device as the PCS sublayer, and multiplexes the 20 PCS lanes into the ten electrical lanes of the parallel physical interface (PPI), which is the non retimed electrical interface that connects the PMA to the PMD. In summary, the high level PMA functionality of multiplexing still exists but the actual implementation is dependent on the specific PMD being used.
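In the same toy-model spirit, the PMA's gearbox function can be sketched in Python. The lane grouping shown here is an assumption for illustration only: the standard allows any muxing arrangement, because the lane markers let the receiver sort the PCS lanes back out.

```python
def pma_mux(pcs_lanes, num_physical):
    """Bit-multiplex PCS lanes onto fewer physical lanes.

    Toy model: the lane count must divide evenly, and each physical lane
    carries a fixed group of PCS lanes, bit-interleaved round-robin."""
    group = len(pcs_lanes) // num_physical
    physical = []
    for k in range(num_physical):
        members = pcs_lanes[k * group:(k + 1) * group]
        physical.append([bit for rnd in zip(*members) for bit in rnd])
    return physical

pcs = [[i] * 4 for i in range(20)]  # 20 toy PCS lanes, tagged by lane index
caui = pma_mux(pcs, 10)             # 20:10 muxing toward the CAUI
pmd = pma_mux(pcs, 4)               # 20:4 muxing toward a 100GBASE-LR4 PMD
assert len(caui) == 10 and len(pmd) == 4
```

Each physical lane ends up carrying an interleave of a fixed set of PCS lanes, which is the property the prose above describes: bits from one PCS lane always travel the same physical path.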
40 Gigabit Ethernet and 100 Gigabit Ethernet Interfaces

The various chip interfaces in the IEEE Std 802.3ba-2010 amendment are illustrated in Figure 3. The amendment specifies some interfaces as logical, intra-chip interfaces, as opposed to the physical, inter-chip interfaces they have been in the past. A logical interface specification only describes the signals and their behavior. A physical interface specification also describes the electrical and timing parameters of the signals. The inclusion of logical interfaces supports system-on-a-chip (SoC) implementations where the various cores implementing the different sublayers are supplied by different vendors. The provision of an open interface specification through the IEEE Std 802.3ba-2010 amendment will help integrate these cores into a SoC in the same way that chips from different vendors can be integrated to build a system. While a physical interface specification is sufficient to specify a logical interface, there are cases where an interface is unlikely to ever be implemented as a physical interface, making the provision of electrical and timing parameters unnecessary.

There are three defined chip interfaces that have a common architecture for both speeds. The MII is a logical interface that connects the MAC to a PHY, and the AUI is a physical interface that extends the connection between the PCS and the PMA. The naming of these interfaces follows the convention found in 10 Gigabit Ethernet, IEEE Std 802.3ae, where the 'X' in XAUI and XGMII represents the Roman numeral 10. Since the Roman numeral for 40 is 'XL' and the Roman numeral for 100 is 'C', the same convention yields XLAUI and XLGMII for 40 gigabit per second and CAUI and CGMII for 100 gigabit per second.
The final interface is the parallel physical interface (PPI), discussed in further detail below, which is the physical interface for the connection between the PMA and the PMD for the 40GBASE-SR4 and 100GBASE-SR10 PMDs.

40 Gigabit Attachment Unit Interface (XLAUI) and 100 Gigabit Attachment Unit Interface (CAUI)

The XLAUI, which supports the 40 gigabit per second data rate, and the CAUI, which supports the 100 gigabit per second data rate, are low pin count physical interfaces that enable partitioning between the MAC and the sublayers associated with the PHY, in a similar way to XAUI in 10 Gigabit Ethernet. They are self-clocked, multi-lane, serial links utilizing 64B/66B encoding. Each lane operates at an effective data rate of 10 gigabit per second, which, when 64B/66B encoded, results in a signaling rate of 10.3125 gigabaud. The lanes utilize low-swing, AC-coupled, balanced differential signaling to support a distance of approximately 25 cm. In the case of XLAUI, there are four transmit and four receive lanes of 10 gigabit per second, resulting in a total of 8 pairs, or 16 signals. In the case of CAUI, there are 10 transmit lanes and 10 receive lanes of 10 gigabit per second, resulting in a total of 20 pairs, or 40 signals. These interfaces can serve as chip-to-chip interfaces. For example, they are used to partition the system design between the largely digital system chip and the more analog portions of the PHY chip, which are often based on different technology. In addition, while there is no mechanical connector specified for
XLAUI and CAUI in the IEEE 802.3ba amendment, these interfaces have also been specified for chip-to-module applications in pluggable form factor specifications, enabling a single host system to support the various PHY types through pluggable modules. The pluggable form factor specifications themselves are beyond the scope of the IEEE and are being developed by other industry organizations.

Parallel Physical Interface (PPI)

The PPI is a physical interface for short distances between the PMA and PMD sublayers. It is common to both 40 Gigabit Ethernet and 100 Gigabit Ethernet; the only differentiation is the number of lanes. The PPI is a self-clocked, multi-lane, serial link utilizing 64B/66B encoding. Each lane operates at an effective data rate of 10 gigabit per second, which, when 64B/66B encoded, results in a signaling rate of 10.3125 gigabaud. In the case of 40 Gigabit Ethernet, there are 4 transmit and 4 receive lanes of 10 gigabit per second; in the case of 100 Gigabit Ethernet, there are 10 transmit lanes and 10 receive lanes of 10 gigabit per second.

Physical Medium Dependent (PMD)

Different physical layer specifications for computing and network aggregation applications have been developed. For computing applications, physical layer solutions will cover distances inside the data center of up to 100m for a full range of server form factors, including blade, rack, and pedestal configurations. For network aggregation applications, the physical layer solutions include distances and media appropriate for data center networking, as well as service provider interconnection for intra-office and inter-office applications. A summary of the physical layer specifications developed for each MAC rate is shown in Table 2.
Reach                      40 Gigabit Ethernet    100 Gigabit Ethernet
At least 1m backplane      40GBASE-KR4            n/a
At least 7m copper cable   40GBASE-CR4            100GBASE-CR10
At least 100m OM3 MMF      40GBASE-SR4            100GBASE-SR10
At least 150m OM4 MMF      40GBASE-SR4            100GBASE-SR10
At least 10km SMF          40GBASE-LR4            100GBASE-LR4
At least 40km SMF          n/a                    100GBASE-ER4

Table 2 IEEE 802.3ba Physical Layer Specifications
The BASE-CR and 40GBASE-KR4 Physical Layer Specifications

The 40GBASE-KR4 PMD supports backplane transmission, while the 40GBASE-CR4 and 100GBASE-CR10 PMDs support transmission across copper cable assemblies. All three PHYs leverage the Backplane Ethernet 10GBASE-KR architecture, its channel requirements, and its PMD. The architecture for these PHY types is shown in Figure 4. All three PHYs use the standard 40GBASE-R and 100GBASE-R PCS and PMA sublayers. The BASE-CR and 40GBASE-KR4 PHYs also include an auto-negotiation (AN) sublayer and an optional FEC sublayer.

The BASE-CR and 40GBASE-KR4 specifications also leverage the channel development efforts of the Backplane Ethernet project. The channel specifications for 10GBASE-KR were developed to ensure robust transmission at 10 gigabit per second. The 40 Gigabit Ethernet and 100 Gigabit Ethernet PHYs apply these channel characteristics to 4- and 10-lane solutions. The BASE-CR specifications also leverage the cable assembly specifications developed in support of 10GBASE-CX4. For 40GBASE-CR4, two connectors have been selected. The first is the QSFP+ connector, which supports a module footprint that can accept either copper-based or optics-based modules. The second is the 10GBASE-CX4 connector, which enables an upgrade path for those applications that have already invested in 10GBASE-CX4.

The effective data rate per lane is 10 gigabit per second, which, when 64B/66B encoded, results in a signaling rate of 10.3125 gigabaud. Thus, the 40GBASE-KR4 and 40GBASE-CR4 PMDs support transmission of 40 Gigabit Ethernet over 4 differential pairs in each direction over either a backplane or twin-axial copper cabling medium, while the 100GBASE-CR10 PMD supports the transmission of 100 Gigabit Ethernet over 10 differential pairs in each direction for at least 7m over a twin-axial copper cable assembly.

Figure 4 Backplane and Copper Cable Architecture
BASE-SR, BASE-LR, and BASE-ER Physical Layer Specifications

All of the optical PMDs share the common architecture shown in Figure 5. While they share a common architecture, the PMA sublayer plays a key role, multiplexing between the number of PCS lanes presented by the PCS sublayer and the number of physical lanes required by the PMD sublayer and medium.

Figure 5 40GBASE-R and 100GBASE-R Architecture

The different optical PMDs are:

- 40GBASE-SR4 and 100GBASE-SR10, based on 850nm technology, supporting transmission over at least 100m of OM3 parallel fiber and at least 150m of OM4 parallel fiber. The effective data rate per lane is 10 gigabit per second, which, when 64B/66B encoded, results in a signaling rate of 10.3125 gigabaud. Therefore, 40GBASE-SR4 supports transmission of 40 Gigabit Ethernet over a parallel fibre medium consisting of 4 parallel OM3 fibers in each direction, while the 100GBASE-SR10 PMD supports the transmission of 100 Gigabit Ethernet over a parallel fibre medium consisting of 10 parallel OM3 fibers in each direction.
- 40GBASE-LR4, based on 1310nm coarse wavelength division multiplexing (CWDM) technology, supporting transmission over at least 10km on SMF. The grid is based on the ITU CWDM grid specification, using wavelengths of 1270, 1290, 1310, and 1330nm. The effective data rate per wavelength is 10 gigabit per second, which, when 64B/66B encoded, results in a signaling rate of 10.3125 gigabaud. This will help provide maximum re-use of existing 10G PMD technology. In this way, the 40GBASE-LR4 PMD supports transmission of 40 Gigabit Ethernet over 4 wavelengths on each SMF in each direction.
- 100GBASE-LR4, based on 1310nm dense wavelength division multiplexing (WDM) technology, supporting transmission over at least 10km on SMF. The grid is based on the ITU grid specification, using wavelengths of 1295, 1300, 1305, and 1310nm.
The effective data rate per wavelength is 25 gigabit per second, which, when 64B/66B encoded, results in a signaling rate of 25.78125 gigabaud. In this way, the 100GBASE-LR4 PMD supports transmission of 100 Gigabit Ethernet over 4 wavelengths on each SMF in each direction.
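The signaling rates quoted throughout this paper all follow from the same 64B/66B arithmetic (2 overhead bits per 64 payload bits); a one-line check in Python:

```python
def signaling_rate_gbd(data_rate_gbps):
    """64B/66B adds 2 overhead bits per 64 payload bits, so the line
    (signaling) rate is the effective data rate scaled by 66/64."""
    return data_rate_gbps * 66 / 64

assert signaling_rate_gbd(10) == 10.3125   # XLAUI, CAUI, PPI, and 10G lanes
assert signaling_rate_gbd(25) == 25.78125  # 100GBASE-LR4/ER4 wavelengths
```

The same 3.125% overhead applied at 10 Gigabit Ethernet carries over unchanged, which is part of what lets these PHYs reuse existing 10G component technology.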
- 100GBASE-ER4, based on 1310nm WDM technology, supporting transmission over at least 40km on SMF. The grid is based on the ITU grid specification, using wavelengths of 1295, 1300, 1305, and 1310nm. The effective data rate per wavelength is 25 gigabit per second, which, when 64B/66B encoded, results in a signaling rate of 25.78125 gigabaud. Therefore, the 100GBASE-ER4 PMD supports transmission of 100 Gigabit Ethernet over 4 wavelengths on each SMF in each direction. To achieve the 40km reach, it is anticipated that implementations will include semiconductor optical amplifier (SOA) technology.

Conclusion

Ethernet has become the unifying technology enabling communications via the Internet and other networks using IP. Its popularity has resulted in a complex ecosystem between carrier networks, data centers, enterprise networks, and consumers, with a symbiotic relationship between the various parts. While symbiotic in nature, the different applications in the Ethernet ecosystem are growing at different rates: server and computing applications are growing at a slower pace than network aggregation applications. This divergence in growth rates spurred the introduction of two higher rates for the next generation of Ethernet: 40 Gigabit Ethernet for server and computing applications and 100 Gigabit Ethernet for network aggregation applications. This will enable Ethernet, with its proven low cost, known reliability, and simplicity, to continue to evolve and be the ubiquitous connection for traffic on the Internet.

About the Ethernet Alliance

The Ethernet Alliance was formed by companies committed to the continued success and expansion of Ethernet technologies. By providing a cohesive, market responsive, industry voice, the Ethernet Alliance helps accelerate industry adoption of existing and emerging IEEE 802 Ethernet standards. It serves as an industry resource for end users and focuses on establishing and demonstrating multi-vendor interoperability.
As networks and content become further intertwined, the Ethernet Alliance works to foster collaboration between Ethernet and complementary technologies to provide a totally seamless network environment. To learn more, please go to www.ethernetalliance.org.
Glossary

AN - Auto-Negotiation
CAUI - 100 gigabit per second Attachment Unit Interface
CGMII - 100 gigabit per second Media Independent Interface
CWDM - Coarse Wavelength Division Multiplexing
FEC - Forward Error Correction
HSSG - Higher Speed Study Group
IEEE Std 802.3 - the Ethernet Standard
IEEE P802.3ba - the project that developed the amendment to the Ethernet Standard for 40 Gb/s and 100 Gb/s Ethernet
IEEE Std 802.3ba-2010 - the approved amendment to the Ethernet Standard for 40 Gb/s and 100 Gb/s Ethernet
IP - Internet Protocol
MAC - Media Access Control Layer
MDI - Medium Dependent Interface
MII - Media Independent Interface
MLD - Multilane Distribution
OTN - Optical Transport Network
PCS - Physical Coding Sublayer
PHY - Physical Layer Device
PMA - Physical Medium Attachment Sublayer
PMD - Physical Medium Dependent Sublayer
PPI - Parallel Physical Interface
RS - Reconciliation Sublayer
SoC - System on a Chip
WDM - Wavelength Division Multiplexing
XLAUI - 40 gigabit per second Attachment Unit Interface
XLGMII - 40 gigabit per second Media Independent Interface
Arista 40G Cabling and Transceivers: Q&A 40G Cabling Technical Q&A Document 40Gigabit Cables and Transceivers Q. What 40G cables and transceivers are available from Arista? A. Arista supports a full range
IEEE 802.3 Ethernet The ubiquitous wired connectivity technology David Law IEEE 802.3 Working Group Chair David_Law@3Com.com Page 1 Before I Share My Views At lectures, symposia, seminars, or educational
Data Sheet Cisco 10GBASE X2 Modules Product Overview The Cisco 10GBASE X2 modules (Figure 1) offer customers a wide variety of 10 Gigabit Ethernet connectivity options for data center, enterprise wiring
Ethernet 301: 40/100GbE Fiber Cabling and Migration Practices Robert Reid (Panduit) Frank Yang (CommScope, Inc) 1 The Presenters Robert Reid Sr. Product Development Manager at Panduit Frank Yang Marketing
100 Gigabit Ethernet Fundamentals, Trends, and Measurement Requirements By Peter Winterling Nearly every two years, a new hierarchy level is announced for telecommunications. The introduction of the 40
Uncompromising Integrity Making 100Gb/s deployments as easy as 10Gb/s Supporting Data Rates up to 100Gb/s Mellanox 100Gb/s LinkX cables and transceivers make 100Gb/s deployments as easy as 10Gb/s. A wide
Whitepaper 10 Things to Know Before Deploying 10 Gigabit Ethernet Table of Contents Introduction... 3 10 Gigabit Ethernet and The Server Edge: Better Efficiency... 3 SAN versus Fibre Channel: Simpler and
HIGH QUALITY ACCESSORIES AND INFRASTRUCTURE FOR DATA CENTRES PRODUCT CATALOG 12/2014 PRO-ZETA a.s. COMPANY PROFILE PROZETA is a high tech IT company founded in 1991, providing hardware distribution, connectivity
Pluggable Optics for the Data Center Pluggable Optics for the Data Center The data center ecosystem is going through unprecedented growth and innovation as new players, new business models and new technologies
Latest Trends in Data Center Optics IX Fórum 9 December 8, 2015 Christian Urricariet Finisar Corporation 1 Finisar Corporation Technology Innovator. Broad Product Portfolio. Trusted Partner. Optics industry
10 Gigabit Ethernet Copper-to-Fiber Media Converter - Open SFP+ - Managed StarTech ID: ET10GSFP This 10GbE fiber media converter lets you scale your network using the 10Gb SFP+ transceiver that best suits
- Data Communication and Networks (Optional) INTRODUCTION This is one of the optional courses designed for Semester 4 of the Bachelor of Information Technology Degree program. This course on Data Communication
An Oracle White Paper May 2012 Oracle Big Data Appliance: Datacenter Network Integration Disclaimer The following is intended to outline our general product direction. It is intended for information purposes
Fibre Channel Fiber-to-Fiber Media Converters CM-155-XX CM-131-XX Multi-mode to Single-mode series Single-mode to Single-mode series Low cost CCM-1600 Fibre Channel media converter modules by Canary Communications
Optics Update to Joint Proposal of May, 1995 Overview of approach Applications coverage Details (white paper) The following companies have indicated their support for the concepts outlined in this proposal
WP_LowLoss_D_B 10/13/14 2:08 PM Page 2 The Need for Low-Loss Multifiber Connectivity In Today s Data Center Optical insertion loss budgets are now one of the top concerns among data center managers, especially
DATA SHEET CISCO DWDM XENPAK OVERVIEW The Cisco Dense Wavelength-Division Multiplexing (DWDM) XENPAK pluggable allows enterprise companies and service providers to provide scalable and easy-to-deploy 10
Data Sheet Cisco Small Form-Factor Pluggable Modules for Gigabit Ethernet Applications The industry-standard Cisco Small Form-Factor Pluggable (SFP) Gigabit Interface Converter is a hot-swappable input/output
An Overview: The Next Generation of Ethernet IEEE 802 Plenary Atlanta, GA November 12, 2007 Contributors Howard Frazier, Broadcom Mark Nowell, Cisco Chris Cole, Finisar John D Ambrosia, Force10 Networks
SYSTIMAX Solutions Top of Rack: An Analysis of a Cabling Architecture in the Data Center White paper Matthew Baldassano, Data Center Business Unit CommScope, Inc, June 2010 www.commscope.com Contents I.
Using FPGAs to Design Gigabit Serial Backplanes April 17, 2002 Outline System Design Trends Serial Backplanes Architectures Building Serial Backplanes with FPGAs A1-2 Key System Design Trends Need for.
Computer Networks Lecture 06 Connecting Networks Kuang-hua Chen Department of Library and Information Science National Taiwan University Local Area Networks (LAN) 5 kilometer IEEE 802.3 Ethernet IEEE 802.4
The Need for Speed Drives High-Density OM3/OM4 Optical Connectivity in the Data Center Doug Coleman, Manager, Technology & Standards, Corning Cable Systems Optical connectivity with OM3/OM4 laser-optimized
TCP/IP Network Communication in Physical Access Control The way it's done: The security industry has adopted many standards over time which have gone on to prove as solid foundations for product development
Cloud-Based Apps Drive the Need for Frequency-Flexible Generators in Converged Data Center Networks Introduction By Phil Callahan, Senior Marketing Manager, Timing Products, Silicon Labs Skyrocketing network
Technical White Paper: 10GBASE-T SFP+ Transceiver Module February 24, 2016 10GBASE-T SFP+ Transceiver Module: Get the most out of your Cat 6a Cabling Enabling More Cat 6a Connectivity for 10GbE Networking
Security & Surveillance Cabling Systems Security and Surveillance Cabling Systems The video security industry is growing and ever-changing, offering a wealth of opportunity for today s security professionals.
The 40 Gigabit Ethernet Era in the Enterprise Applying Lessons Learned from the Transition to 10 Gigabit Ethernet White Paper With the ratification of the IEEE 802.3ab in 2010, 40GbE has displaced 10GbE
The Optical Fiber Ribbon Solution for the 10G to 40/100G Migration Bill Charuk, SR Product Manager, Data Center Solutions Introduction With the advent of data centers in communication infrastructure, various
SummitStack in the Data Center Abstract: This white paper describes the challenges in the virtualized server environment and the solution that Extreme Networks offers a highly virtualized, centrally manageable
Gigabit Ethernet: Architectural Design and Issues Professor of Computer and Information Sciences Columbus, OH 43210 http://www.cis.ohio-state.edu/~jain/ 9-1 Overview Distance-Bandwidth Principle 10 Mbps
Lecture 4 Continuation of transmission basics Chapter 3, pages 75-96 Dave Novak School of Business University of Vermont Objectives Line coding Modulation AM, FM, Phase Shift Multiplexing FDM, TDM, WDM
White Paper Intel PRO Network Adapters Network Performance Network Connectivity Express* Ethernet Networking Express*, a new third-generation input/output (I/O) standard, allows enhanced Ethernet network