100 Gb/s Ethernet/OTN using 10X10 MSA Optical Modules




David Lewis, JDSU; Jim Tavacoli, Santur; Scott Kipp, Brocade; Bikash Koley, Google; Vijay Vusirikala, Google

Executive Summary

10X10 MSA based optical modules provide a low-cost, power-efficient, multi-sourced and interoperable solution that will accelerate the transition to 100 Gbps technologies. The 10X10 MSA leverages mature 10G technologies to provide the lowest-cost solution in terms of bandwidth per meter and bandwidth per watt. This white paper summarizes the drivers and design considerations behind the 10X10 MSA from the perspectives of three groups in the networking value chain: large end users, system vendors and component vendors.

Introduction

The dramatic growth in innovative, bandwidth-intensive applications, coupled with the migration from a local compute/storage model to a cloud computing paradigm, is driving the need for 100 Gbps interconnect solutions. The key impediment to accelerated adoption of 100 Gigabit Ethernet has been the high cost and high power footprint of the IEEE-standardized 10 km client optical modules (100GBASE-LR4). Optical modules based on the 10X10 MSA address these shortcomings of the LR4 optical module and provide a multi-sourced, interoperable solution that is compatible with the CFP form factor. This white paper describes the drivers and technical underpinnings behind the 10X10 MSA optical modules and provides detailed perspectives from large end users, system vendors and component vendors.

Large Data Center Operator Perspective: Cost and Power Efficient Interconnect Scaling

As computation and storage continue to move from desktops to large internet or cloud services, the computing platforms running such services are transforming into warehouse-scale computers (WSCs). These WSCs provide a ubiquitous, interconnected compute platform as a shared resource for many distributed services, and are therefore very different from the traditional rack of collocated servers in a data center [1]. Interconnecting such WSCs in a cost-effective yet scalable way is a unique challenge that needs to be addressed. Cost-effective and power-efficient 100 Gigabit Ethernet interfaces that support more than 150 meters of reach are instrumental in scaling the interconnection within and between these WSCs.

Figure 1 shows a common architecture of a WSC. A set of commodity servers is arranged into racks and interconnected through a Top Of Rack (TOR) switch. Rack switches are connected to cluster switches, which provide connectivity between racks and form the cluster fabrics for warehouse-scale computing.

Figure 1 - Typical elements in a Warehouse Scale Computer

Ideally, the intra-datacenter switching fabric should have sufficient bisection bandwidth to accommodate non-blocking connections from every server to every other server in a datacenter. Non-blocking connections between servers enable applications that do not require location awareness within a WSC infrastructure. However, such a design would be prohibitively expensive. More commonly, interconnections are aggregated into hierarchies of distributed switching fabrics with an over-subscription factor for communication between racks (Fig. 2) [2]. In order to scale the cross-sectional bandwidth at the various layers of the hierarchy, the introduction of higher-speed optical interfaces is necessary, provided those interfaces follow four rules [2]:

1. 10x speed @ 4x power dissipation
2. 10x speed @ 4x cost
3. Compatible with the deployed fiber infrastructure
4. Supported by multiple sources

Figure 2 - Hierarchies of intra-datacenter cluster-switching interconnect fabrics: (a) within a single building; (b) across multiple buildings

At the initial phase of introduction, the higher-speed interfaces must not dissipate more power or cost more on a per-Gbps basis than the previous generation of lower-speed interfaces. With maturity, the interfaces need to follow the 10x @ 4x rule for both cost and power dissipation. This is illustrated by the speed with which 10 Gbps optical interfaces became the mainstay of WSC interconnects after the introduction of SFP+ (10x speed for 1.5x power/cost compared to 1GbE interfaces). 100 Gbps optical interfaces need to follow the same scaling rule in order to gain wide acceptance in very large WSC deployments.

Unfortunately, neither of the two main optical interface standards defined in IEEE 802.3ba for 100GbE client interfaces (100GBASE-SR10 and 100GBASE-LR4) meets all four requirements outlined above. 100GBASE-SR10 requires a fiber plant that is incompatible with many data centers (expensive multimode ribbon fibers and MPO connectors), while 100GBASE-LR4 CFP interfaces currently scale to 10x speed for 100x cost and 17x power dissipation compared to 10G SFP+ optical interfaces. It is necessary to define a solution today that is compatible with the single-mode optical fiber plant, has at least 2 km reach and meets 10x speed for 10x power/cost scaling today, with a clear path towards 4x cost and power in 2012. This sets the target for the first-generation modules at no more than 15 W power dissipation and cost parity on a per-gigabit-per-second basis with equivalent 10G interfaces. The second-generation modules need to meet a 6 W power consumption target in a smaller form factor by 2012.
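These scaling targets can be sanity-checked with a short calculation. The sketch below is illustrative only and is not part of the MSA: it assumes a 10GBASE-LR SFP+ baseline of roughly 1.5 W (the value implied by the 15 W first-generation target) and uses the approximately 21 W 100GBASE-LR4 CFP figure quoted later in this paper, then reports each module's power-scaling multiple against the introduction-phase rule (per-Gbps parity, i.e. no more than 10x) and the mature-phase rule (no more than 4x).

```python
# Illustrative check of the "10x speed @ 4x power" scaling rule.
# Assumption: a 10GBASE-LR SFP+ baseline of ~1.5 W (implied by the 15 W
# first-generation target). Module powers are the figures quoted in this paper.

SFP_PLUS_POWER_W = 1.5   # assumed 10G baseline power
SPEED_MULTIPLE = 10      # 100G is 10x the 10G baseline speed

modules_w = {
    "100GBASE-LR4 CFP (approx.)": 21.0,   # paper cites ~17x, i.e. a somewhat lower SFP+ baseline
    "10X10 MSA CFP, first-gen target": 15.0,
    "10X10 MSA HD, second-gen target": 6.0,
}

for name, power_w in modules_w.items():
    multiple = power_w / SFP_PLUS_POWER_W    # power relative to one SFP+
    per_gbps = multiple / SPEED_MULTIPLE     # relative power per Gb/s
    intro_ok = multiple <= SPEED_MULTIPLE    # introduction phase: per-Gbps parity
    mature_ok = multiple <= 4                # mature phase: 10x speed @ 4x power
    print(f"{name}: {multiple:.1f}x power, {per_gbps:.2f}x per Gb/s, "
          f"parity rule {'met' if intro_ok else 'not met'}, "
          f"4x rule {'met' if mature_ok else 'not met'}")
```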

The 10X10 MSA has been established to define a 100 Gbps interface standard that meets these requirements. First-generation 10X10 CFP modules have a considerable cost advantage over 100GBASE-LR4 CFP modules, as shown in Figure 3. The second-generation 10X10 module now being defined, referred to as the 10X10 High Density (HD) form-factor module, is expected to have a considerable cost advantage over the second-generation 100GBASE-LR4 CFP2 module. The 10X10 HD form-factor modules are the only 100 Gbps interface solution capable of meeting the needs of very large WSC interconnects with the right scaling factor in the next three to five years.

Figure 3 - 100 Gbps client interface price evolution, normalized to existing 10 Gbps module pricing. The chart plots normalized cost per 10G from 2009 to 2015 for the 10GBASE-LR SFP+ baseline, 100GBASE-LR4 CFP, 100GBASE-LR4 CFP2 (shown at 50% of the 100GBASE-LR4 CFP price), 10X10 MSA CFP and 10X10 MSA HD form factor; the 10X10 MSA CFP breaks cost parity with SFP+ in the outer years, and the 10X10 MSA HD form factor also breaks cost parity with SFP+.

System Vendor Perspective: Low Cost, Low Power and CFP Compatibility

The key benefits from the system vendor perspective are that the 10X10 MSA based optical link provides better link distance than 100GBASE-SR10, is lower cost and power than 100GBASE-LR4, and fits in the CFP form factor. Many customers need to go farther than the 100 meters of 100GBASE-SR10 but cannot afford the high cost of 100GBASE-LR4. The 10X10 module in the CFP form factor can easily replace the 100GBASE-LR4 CFP and meet the needs of most users at a much lower cost.

1. Longer Link Distances

The 10X10 2 km link fills a gap between the 100 m reach of 100GBASE-SR10 links on Optical Multimode 3 (OM3) fiber and the 10 km reach of 100GBASE-LR4 Single Mode Fiber (SMF) links. While 100 meter links fulfill the needs of many applications, link distances easily exceed 100 meters in large data centers. A quick analysis shows that data centers of more than 50,000 square feet (sqft) will need links longer than 100 meters.

The maximum link length depends on the size of the data center. A simple geometric study shows that a moderately sized data center of 50,000 sqft could have links as long as 150 meters. Based on the formulas in Figure 4, a 75,000 sqft square data center would be about 274 feet wide and 274 feet long. If links are assumed to run along orthogonal paths, the link length would be 548 feet in two dimensions. The data in Table 1 assumes that 25 feet of additional link length, known as slack, is used to run the cable up and down and through racks. With these assumptions, the link length is 573 feet, or 174 meters. This is longer than the longest link supported by 100GBASE-SR10, which is 150 meters on OM4 fiber, so the end user must use 100GBASE-LR4 or the 10X10 solution.

Figure 4 - Geometry illustrating the longest link length in a large data center layout

Table 1: Link Length Comparison

Data Center Size (sq ft) | Data Center Size (sq m) | Longest Cable, Square Data Center (m) | Longest Cable, Rectangular Data Center (m)
 10,000                  |    918                  |  68                                   |  72
 30,000                  |  2,755                  | 113                                   | 119
 50,000                  |  4,591                  | 143                                   | 151
 75,000                  |  6,887                  | 174                                   | 184
100,000                  |  9,183                  | 199                                   | 211
125,000                  | 11,478                  | 222                                   | 235
150,000                  | 13,774                  | 242                                   | 257
200,000                  | 18,365                  | 279                                   | 295
300,000                  | 27,548                  | 340                                   | 360
400,000                  | 36,731                  | 391                                   | 414

The cabling costs for multimode fiber links based on 100GBASE-SR10 are considerably higher than for the single-mode links. 100GBASE-SR10 solutions require 24-fiber ribbons to connect the modules, while the 10X10 and 100GBASE-LR4 modules only require duplex single-mode fiber. Since 24-fiber ribbon costs tens of dollars more per foot than duplex single-mode fiber, the cabling cost difference over a 200-foot link can be several thousand dollars. The 10X10 solution therefore offers lower cabling costs than the 100GBASE-SR10 solution.

If the end user requires the 10 km reach of 100GBASE-LR4, some vendors support a 10 km version of the 10X10 solution. Modules that go beyond the 2 km 10X10 specification but still meet the receiver requirements of the 10X10 MSA are considered compliant with the 10X10 MSA.
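The tabulated lengths can be reproduced from the geometry described above. The short sketch below is a reconstruction rather than part of the original paper: it assumes orthogonal cable routing with 25 feet of slack (as stated above), a 2:1 floor-plan aspect ratio for the rectangular case (an assumption that is consistent with the tabulated numbers), and the coarse 3.3 ft/m conversion that the table's square-meter column appears to use.

```python
import math

FT_PER_M = 3.3   # coarse ft-to-m conversion implied by the table's sq m column
SLACK_FT = 25.0  # extra length to route the cable up, down and through racks

def longest_cable_m(area_sqft, aspect_ratio=1.0):
    """Longest orthogonally routed cable run, in meters, for a rectangular
    data center of the given floor area and aspect ratio (1.0 = square)."""
    area_m2 = area_sqft / FT_PER_M**2
    width = math.sqrt(area_m2 / aspect_ratio)     # shorter side (m)
    length = width * aspect_ratio                 # longer side (m)
    return width + length + SLACK_FT / FT_PER_M   # orthogonal run plus slack

for sqft in (10_000, 30_000, 50_000, 75_000, 100_000,
             125_000, 150_000, 200_000, 300_000, 400_000):
    square = longest_cable_m(sqft)                 # square floor plan
    rect = longest_cable_m(sqft, aspect_ratio=2)   # assumed 2:1 floor plan
    print(f"{sqft:>7,} sqft: square ~{square:.0f} m, rectangular ~{rect:.0f} m")
```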

2. Lower Cost

If a user has to go beyond 150 meters, they must consider a single-mode solution, and here the cost difference between the 10X10 solution and the 100GBASE-LR4 solution is even larger. As shown in the blog post "How to Save $100,000 with a 10X10 Link", the end-user cost of a 100GBASE-LR4 CFP module can be well over $100,000. Since 10X10 CFP modules are less than half the price of the 100GBASE-LR4 module, the end user can save over $100,000 on a link that uses two modules.

3. Lower Power

The 100GBASE-LR4 CFP module consumes about 21 Watts of power while the 10X10 CFP module consumes about 14 Watts. The 10X10 link therefore saves about 7 Watts per module, or 33% of the power used by the modules, compared to the 100GBASE-LR4 link. The power consumed by the module must also be cooled, and one rule of thumb is that an equal amount of power is consumed to cool the data center for each Watt of equipment consumption.

4. CFP Compatibility

Another benefit from the system provider perspective is that the 10X10 module is compliant with the CFP specification and can therefore be used in the same slot as the 100GBASE-LR4 solution. The 10X10 CFP module uses the same CAUI electrical interface, MDIO management interface and single-mode fiber as the 100GBASE-LR4 solution. The 10X10 module is thus interchangeable with a 100GBASE-LR4 module while providing a lower-cost and lower-power solution.
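The cost and power comparisons in sections 2 and 3 can be summarized per link with a little arithmetic. The sketch below is only illustrative: the module prices are placeholder assumptions chosen to match the "well over $100,000" and "less than half the price" statements above, and the cooling overhead uses the one-Watt-of-cooling-per-Watt rule of thumb.

```python
# Illustrative per-link savings of a 10X10 CFP link vs. a 100GBASE-LR4 CFP link.
# Prices are placeholder assumptions consistent with the text above; power
# figures are the approximate values quoted in this paper.

MODULES_PER_LINK = 2                   # one module at each end of the link

lr4_price_usd = 110_000                # assumed: "well over $100,000" per LR4 CFP module
x10_price_usd = lr4_price_usd * 0.5    # assumed: "less than half the price"

lr4_power_w = 21.0                     # ~21 W per 100GBASE-LR4 CFP module
x10_power_w = 14.0                     # ~14 W per 10X10 CFP module
cooling_overhead = 1.0                 # rule of thumb: 1 W of cooling per 1 W of equipment

cost_saving = (lr4_price_usd - x10_price_usd) * MODULES_PER_LINK
power_saving_modules = (lr4_power_w - x10_power_w) * MODULES_PER_LINK
power_saving_total = power_saving_modules * (1 + cooling_overhead)

print(f"Module cost saving per link:  ${cost_saving:,.0f}")
print(f"Module power saving per link: {power_saving_modules:.0f} W "
      f"({(lr4_power_w - x10_power_w) / lr4_power_w:.0%} per module)")
print(f"Facility power saving per link, including cooling: {power_saving_total:.0f} W")
```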

Optical Component Vendor Perspective: Mature Technology with Lower Cost Points and Future Compatibility

With the publication of the 100 Gigabit Ethernet standards in IEEE 802.3ba and the adoption of the OTU4 rate of 112 Gbps to directly carry 100 GbE, a new generation of optical modules has been developed for these 100 Gbps applications. The first to become available is the CFP form factor (Fig. 5), developed by the CFP MSA (http://cfp-msa.org) and intended for implementations of 40G and 100G standards up to and including 100GBASE-ER4 (40 km). The CFP electrical connector has pins for 10 lanes in each direction and is hence able to support 10-wide standards, including 100GBASE-SR10 and 10X10 MSA implementations.

Figure 5 - CFP Module

An appendix in the 10X10 MSA technical specification provides details of the electrical connections and MDIO NVR entries applicable to implementation of 10X10 in the CFP form factor. CFP 10X10 modules began shipping in 2010. Based on readily available 10 Gbps DFB laser arrays and PIN-TIA circuits, these modules will be the low-cost leader until higher-density modules appear.

The CFP form factor is large enough to handle the electrical interface signals and power consumption requirements of multiple 40 and 100 Gb/s standards up to and including the 40 km 100GBASE-ER4. However, there is increasing user demand for higher-density 100 Gb/s modules, and there is industry activity to standardize modules with footprints significantly smaller than the CFP. To achieve this smaller size, efforts are underway to miniaturize the optical subassemblies, use low-power laser drivers and optical receivers, and use non-retimed electrical interfaces such as CPPI, which place the retimers outside the optical module. Higher-density versions of 10X10 MSA modules will become available in 2012 and are expected to enable large data centers to move to 100 Gb/s ports.

Conclusion

10X10 MSA optical modules meet the key criteria that various industry stakeholders (end users, system vendors and component manufacturers) are looking for to enable the widespread adoption of 100 Gbps technologies. 10X10 optical modules provide (a) the low cost and power efficiency that is lacking in the currently standardized 100GBASE-LR4, (b) the reach and single-mode duplex fiber capability that is lacking in 100GBASE-SR10, (c) form-factor compatibility with the CFP that makes system design simpler, and (d) compatibility with future higher-density form factors. Equally importantly, the 10X10 MSA enjoys broad support from a wide cross-section of industry representatives, with very significant participation from major end users and system vendors. This direct input from end users into the definition of the technical specifications eliminates the "Lost in Translation" issues that have plagued some past form factors and optical PMDs and helps craft the most targeted and optimal solution.

For more information on the 10X10 solutions, please visit www.10x10msa.org.

References

1. L.A. Barroso and U. Hölzle, The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines, Morgan & Claypool Publishers, 2009. http://www.morganclaypool.com/doi/pdf/10.2200/s00193ed1v01y200905cac006
2. B. Koley, "Requirements for Data Center Interconnects", paper TuA2, 20th Annual Workshop on Interconnections within High Speed Digital Systems, Santa Fe, New Mexico, 3-6 May 2009.