Volume and Velocity are Driving Advances in Data Center Network Technology


By Murray Slovick

Photo: To handle "Big Data," data centers require thousands of servers.

Just about everyone, and certainly every engineer, has heard of Moore's Law, in which Gordon Moore predicted that technological advances would lead to a doubling of the number of transistors on a chip approximately every two years. Fewer people have heard of its networking equivalent, Metcalfe's Law, formulated by Robert Metcalfe, which states that the value of a telecommunications network is proportional to the square of the number of connected users of the system (see the brief sketch at the end of this introduction). Simply put, the greater the number of users of a networked service, the more valuable the service becomes to the community. Now think of the Internet of Things (IoT), in which the user need not be a human, but rather a machine.

Ethernet was developed as a system for connecting computers within a building using hardware running from machine to machine. It has evolved into a family of networking technologies, and its latest iteration, the 40/100 Gigabit Ethernet (GbE) standard known as IEEE 802.3ba, was written with data center communications in mind.
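Here is that sketch: a purely illustrative rendering of Metcalfe's quadratic scaling (the constant k and the endpoint counts are arbitrary choices for illustration, not figures from the article).

    # Metcalfe's Law, illustrative only: network value grows with the square of
    # the number of connected endpoints (people or, for the IoT, machines).
    def metcalfe_value(endpoints: int, k: float = 1.0) -> float:
        return k * endpoints ** 2

    print(metcalfe_value(1_000))  # 1,000 endpoints -> 1e6 (arbitrary units)
    print(metcalfe_value(2_000))  # doubling the endpoints quadruples the value: 4e6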

To serve a high-speed world of constant connectivity, today's data center is home to thousands of host servers organized into clusters. Each host consists of one or more processors, memory, a network interface, and local high-speed I/O, all tightly connected with a high-bandwidth network. Ethernet serves as the cluster interconnect in the majority of cases (with InfiniBand in second place).

Unprecedented Growth

The data center industry is constantly growing, and at an accelerating rate, as more of the world comes online and more businesses turn to the cloud for their data infrastructure. But perhaps more than any other factor, the IoT will have a potentially transformational effect on the data center market, as well as on its providers and technologies. The research firm Gartner, Inc. estimates that by 2020 the IoT will include 26 billion installed units, generating an almost unfathomably large quantity of Big Data that needs to be processed and analyzed in real time. This data will represent an ever-larger proportion of data center workloads, leaving providers facing new capacity, speed, analytics, and security challenges.

Figure 1: Total connected devices, installed base in billions of units. (Source: Gartner)

The Necessary Bandwidth

Search engine providers and other Big Data users (social media forums, online shopping sites, streaming video suppliers) pay a lot of money for thick pipes to connect their data centers. Using search engines as an example, thousands of servers in a data center index the entire Web by using keywords and metadata for Internet searching; Google indexes 20 billion pages each day. Once this task is complete, the indexes have to be moved quickly to other data centers to remain relevant. The pipe connecting data centers must be large enough to accommodate these transfers, but after the indexes have been moved, pipe utilization drops, and the servers, which could now be used for other jobs, can stall if the data does not move fast enough.

So bandwidth is one of the biggest considerations for Big Data. It's a simple, straightforward equation: the faster the connection, the better the service. Currently, 10-Gbit/s transmissions are the fastest Ethernet connections in widespread use. To put this into perspective, consider that most homes and businesses connect to Ethernet with a Category 5 twisted-pair cable, which can transmit up to 1 Gbit/s. For their internal infrastructure, data centers are beginning to adopt the IEEE 802.3ba standard for 40- and 100-Gbit/s Ethernet connections, 40 and 100 times faster, respectively, than the household twisted-pair cable.

First defined by the IEEE in 2010, 100 Gigabit Ethernet (100GbE) and 40 Gigabit Ethernet (40GbE) represent the first instance in which two different Ethernet speeds were specified in a single standard. The decision to include both speeds came from pressure to support the 40-Gbit/s rate for local server applications, while 100GbE better targets network aggregation applications such as service provider client connections, Internet backbones, and network cores. Two years ago the IEEE Bandwidth Assessment Report estimated that core networking bandwidth was doubling every 18 months, with server bandwidth doubling every 24 months.

Deployment of 40- and 100-Gbit/s Ethernet links within data centers has mostly started where traffic is heaviest, such as from rack to rack within the center. Most centers are using 40GbE, but with demand increasing, rapid migration to 100GbE is just a matter of time. Internet service providers have been installing 100GbE since it became available on routers because they need the biggest pipes.

Mobile device apps are also driving what is known as east-west traffic (traffic between and among servers, or from storage to server) instead of the traditional north-south traffic (client to server). According to Cisco, last year's mobile data traffic was nearly 18 times the size of the entire global Internet in 2000: one Exabyte (EB) of traffic traversed the global Internet in 2000 (1 EB equals 10^18 bytes, or 1 billion gigabytes), while in 2013 mobile networks carried nearly 18 EB of traffic. Intel has calculated that for every 600 phones that are turned on, a whole server's worth of capacity has to be utilized to keep those phones fed. Every 120 tablets require another server's worth of capacity, and so do every 20 digital signs and every 12 surveillance cameras.

At the Speed of Light

Fiber-optic lines transfer bits and bytes as light pulses moving along a cable. In a data center, the data goes into racks connected to internal routers, which in turn direct the information to servers. The IEEE 802.3ba standard allows multiple 10-Gbit/s channels to run in parallel or via wavelength division multiplexing (WDM), depending on whether single-mode or multimode fiber (MMF) cables are used. The 10-Gbit/s channels are aggregated in groups of four (40 Gbit/s) or ten (100 Gbit/s); a brief sketch of this lane arithmetic appears below. In most cases, MMF cables are used to provide the additional fiber strands needed for 40- and 100-Gbit/s connections. Engineers can find fiber optic transmitters, transceivers, and receivers on the Mouser website from suppliers including Avago, Emerson Connectivity, Omron, Sharp, Toshiba, and TT Electronics.

With a larger core diameter, MMF cable permits multiple modes of light to travel down its path. Single-mode optical fiber (SMF) is designed to carry light directly down the fiber and has a much narrower core than MMF cables.
SMF is better than multimode fiber at retaining the fidelity of each light pulse over long distances because intermodal dispersion cannot occur, so the pulses spread less as they travel.
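As referenced above, the lane arithmetic behind 40GbE and 100GbE can be sketched in a few lines of Python. This is illustrative only: the lane counts are those given in the article, while the function and variable names are assumptions made for the example.

    # Illustrative only: aggregate Ethernet rates built from parallel 10-Gbit/s lanes,
    # as described for IEEE 802.3ba (4 lanes -> 40GbE, 10 lanes -> 100GbE).
    LANE_RATE_GBPS = 10   # per-lane rate discussed in the article
    CAT5_GBPS = 1         # household twisted-pair rate cited for comparison

    def aggregate_rate(num_lanes: int, lane_rate: int = LANE_RATE_GBPS) -> int:
        """Total link rate when identical lanes are bonded in parallel."""
        return num_lanes * lane_rate

    for name, lanes in [("40GbE", 4), ("100GbE", 10)]:
        rate = aggregate_rate(lanes)
        print(f"{name}: {lanes} x {LANE_RATE_GBPS} Gbit/s = {rate} Gbit/s "
              f"({rate // CAT5_GBPS}x a 1-Gbit/s twisted-pair link)")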

Figure 2: The structure of a typical single-mode fiber: 1. Core, 8 µm diameter; 2. Cladding, 125 µm dia.; 3. Buffer, 250 µm dia.; 4. Jacket, 400 µm dia. (Source: Wikipedia)

WDM combines multiple wavelengths onto a single fiber for single-mode transfer. This allows more data to be transferred on a single cable by using different wavelengths (i.e., colors) of laser light for different pieces of information. A multiplexer and a de-multiplexer, placed at either end of the cable, combine and separate this mixed-light signal. Engineers can find Ethernet media converter modules from suppliers such as Phoenix Contact that allow full-duplex transmission from 10/100Base-TX (the Fast Ethernet standard supported by the vast majority of Ethernet hardware currently produced) to a single simplex glass optical fiber using WDM technology. For example, the manufacturer's part 2902659 offers full-duplex communication over only one fiber and transmission ranges up to 38 km.

Going the Distance

Data centers are becoming massive in scale, occupying millions of square feet and requiring longer and longer reaches for connectivity. A typical cluster has several kilometers of fiber-optic cable acting as a highway system interconnecting racks of servers on the data center floor. The main barrier to adoption of 100-Gbit/s Ethernet connectivity has been not only the expense but also the lack of switch density. The distance between switches in modern data centers is often greater than 100 m; in many cases it can be 500 m, and in some cases it can be up to a kilometer or more. This leaves an enormous opportunity for suppliers to develop high-speed, low-power optical links that can span great distances in data centers while operating at data rates of up to 100 Gbit/s.

Several consortia have recently emerged to satisfy data center operator demands for an affordable, low-power 100GbE optical interface that can reach beyond 100 m, a range that falls between the IEEE's 100GBase-SR4 specification, which covers 100-m reaches, and 100GBase-LR4, which focuses on links up to 10 km. Intel and Arista (along with eBay, Altera, Dell, Hewlett-Packard, and others) earlier this year formed an open industry group and a specification that addresses data center reaches of up to 2 km over duplex single-mode fiber with four lanes of 25-Gbit/s light paths. The CLR4 100G alliance is designing an affordable, low-power optical interface for a Quad Small Form-factor Pluggable (abbreviated QSFP or QSFP+) transceiver. Today's standard optics support 10 lanes of 10 Gbit/s, which leads to thicker, more expensive cables; the CLR4 100G group says its standard will reduce fiber count by 75 percent.
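The following sketch makes the WDM-versus-parallel cabling trade-off concrete. It is purely illustrative: the 4 x 25-Gbit/s lane structure is described in the article, but the CWDM center wavelengths shown (1271/1291/1311/1331 nm) and the per-scheme fiber counts are assumptions about typical implementations, not figures from the article.

    # Illustrative only: mapping four 25-Gbit/s lanes onto CWDM wavelengths so that a
    # 100-Gbit/s link fits on one duplex (2-fiber) single-mode pair, versus a parallel
    # approach that needs one fiber per lane per direction.
    LANE_RATE_GBPS = 25
    CWDM_WAVELENGTHS_NM = [1271, 1291, 1311, 1331]   # assumed typical CWDM grid

    channels = [
        {"lane": i, "wavelength_nm": wl, "rate_gbps": LANE_RATE_GBPS}
        for i, wl in enumerate(CWDM_WAVELENGTHS_NM)
    ]

    for ch in channels:
        print(f"lane {ch['lane']}: {ch['rate_gbps']} Gbit/s on {ch['wavelength_nm']} nm")

    aggregate = sum(ch["rate_gbps"] for ch in channels)
    parallel_fibers = 2 * len(channels)   # e.g., a 4-lane parallel scheme such as PSM4
    print(f"aggregate: {aggregate} Gbit/s over 2 fibers (WDM) "
          f"vs. {parallel_fibers} fibers for a 4-lane parallel approach")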

Mouser offers QSFP transceivers from Avago Technologies, Finisar, 3M, and TE Connectivity. The compact QSFP+ form factor enables low power consumption and high density. For example, Finisar's FTL410QD2C QSFP+ transceiver module is designed for use in 40-Gbit/s links over parallel multimode fiber, including breakout to four 10-Gbit/s links.

Figure 3: Finisar's FTL410QD2C QSFP+ transceiver module. (Source: Finisar)

The CWDM4 MSA (Coarse Wavelength Division Multiplexed 4x25G Multi-Source Agreement) is another group addressing 100GbE over 500 m to 2 km. The four members of the CWDM4 MSA (Avago Technologies, Finisar Corp., JDSU, and Oclaro) say they will offer interoperable 2-km 100G interfaces taking a 4x25G approach over duplex single-mode fiber (SMF). Six technology vendors have created the Parallel Single Mode 4-lane (PSM4) MSA Group, which will use a four-fiber, parallel approach to 100 Gbit/s in the data center. The companies (Avago Technologies, Brocade, JDSU, Luxtera, Oclaro, and Panduit) say there is a need for PSM4 optical transceivers to fill the requirement for low-cost 100-Gbit/s connections at reaches of 500 m.

More to Come

The rapid growth of server, network, and Internet traffic is driving the need for ever-higher data rates, higher density, and lower-cost optical fiber Ethernet solutions. To support evolving architectures, the IEEE is working on new physical layer requirements. The project aims to specify additions to, and appropriate modifications of, IEEE Standard 802.3 to add 100-Gbit/s Physical Layer (PHY) specifications and management parameters, using a four-lane electrical interface for operation on multimode and single-mode fiber-optic cables, and to specify optional Energy Efficient Ethernet (EEE) for 40- and 100-Gbit/s operation over fiber-optic cables. In addition, it will add 40-Gbit/s Physical Layer (PHY) specifications and management parameters for operation on extended-reach (>10 km) single-mode fiber-optic cables. Called P802.3bm, the standard is estimated to be complete in the first quarter of 2015.

400 Gbit/s is under development as the next Ethernet speed, expected on the market after 2016. The IEEE 802.3 400 GE Study Group, formed in March 2013, is establishing initial objectives for 400GE using OM3 or OM4 fiber and 25 Gbit/s per channel, similar to the proposed P802.3bm standard. The new 400 GE standard is estimated to be complete by 2017.

Data is being generated in unprecedented quantities. Research firm IDC predicts that the volume of data will double every 18 months. Twitter receives over 200 million tweets per day; Facebook collects more than 15 Tbytes every day. And the Internet of Things, with machine-generated data from sensors, devices, RFID tags, and the like, could easily dwarf these numbers. But volume is only part of the equation. Velocity matters, too. As more entities engage in social media and tie into the Internet, real-time or near-real-time response becomes critical. This rising demand for web services and cloud computing has created the need for large-scale data centers. But without big pipes and fast speeds, the data centers designed to cope with Big Data will drown in it.