The Need for Speed Drives High-Density OM3/OM4 Optical Connectivity in the Data Center



Doug Coleman, Manager, Technology & Standards, Corning Cable Systems

Optical connectivity with OM3/OM4 laser-optimized 50/125 μm multimode fiber has emerged as the media of choice in the data center. 10GBASE-SR Ethernet is becoming the primary data rate for data centers in response to server virtualization, converged networks and the need to mitigate server I/O bottlenecks. Data centers are deploying OM3/OM4 connectivity solutions to meet 10G two-fiber serial transmission needs, as well as to provide for future migration to 40/100G parallel optics. High-port-count 10/40/100G electronics require high-density optical connectivity in the data center to ease cable management, optimize pathway and space utilization, and support green initiatives.

The Need for Speed

Server virtualization and converged networks drive the need for higher network data rates. Server virtualization increases utilization rates by consolidating multiple applications onto one server, reducing the total number of servers. The ability to support more applications per server comes from technology enhancements in virtualization software and multicore processors (Figure 1). Where legacy servers run one application per server at a typical 15 to 20 percent utilization, virtualized servers can presently support 20 to 25 applications, which can increase utilization to 80 to 90 percent. Virtualized servers are expected to support as many as 100 applications in the near future. Running 25 applications on one physical server offers material and energy cost savings, as it potentially eliminates 24 single-application servers.

Figure 1
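
To make the consolidation arithmetic concrete, the short Python sketch below is a minimal illustration assuming the ratios cited above: one application per legacy server versus roughly 25 applications per virtualized host. It estimates how many physical machines a 100-application workload requires before and after virtualization.

# Rough consolidation estimate, assuming the ratios cited above:
# one application per legacy server vs. ~25 applications per virtualized host.

APPS = 100                      # total applications to host
APPS_PER_LEGACY_SERVER = 1
APPS_PER_VIRTUAL_HOST = 25

legacy_servers = APPS // APPS_PER_LEGACY_SERVER
virtual_hosts = -(-APPS // APPS_PER_VIRTUAL_HOST)   # ceiling division

print(f"Legacy servers needed:    {legacy_servers}")
print(f"Virtualized hosts needed: {virtual_hosts}")
print(f"Servers eliminated:       {legacy_servers - virtual_hosts}")

For 100 applications the sketch reports 100 legacy servers versus 4 virtualized hosts, consistent with the 24-servers-eliminated example above at 25 applications per host.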

The increased number of applications per server generates the need for ≥10G throughput. Depending on the bandwidth requirements of a server, an 8-core processor may be able to drive tens of Gb/s of bandwidth. This translates into the need for a high-data-rate network infrastructure to accommodate a much higher level of server I/O performance. Figure 2 provides a server connection speed forecast (10G, 40G and 100G). The expectation is that 10G will see rapid adoption over the next two years at the server and at network switches, such as the core and edge switches.

Figure 2

Data centers utilize multiple networks, which presents operational and maintenance issues because each network requires dedicated electronics and cabling infrastructure. Ethernet and Fibre Channel are the typical networks: Ethernet provides a local area network (LAN) between users and computing infrastructure, while Fibre Channel provides connections between servers and storage to create a storage area network (SAN). Standards activity has taken place to converge the two networks as Fibre Channel over Ethernet (FCoE). FCoE is simply a transmission method in which the Fibre Channel frame is encapsulated into an Ethernet frame at the server. The server encapsulates Fibre Channel frames into Ethernet frames before sending them over the LAN and de-encapsulates FCoE frames when they are received. The converged network makes use of low-cost Ethernet electronics to transport both Ethernet and Fibre Channel data. Table 1 provides the Fibre Channel Industry Association (FCIA) FCoE speed roadmap. Where 10G FCoE utilizes serial duplex fiber transmission, 40/100G FCoE speeds will require parallel optics.

Table 1 - FCoE Speed Roadmap

Product Naming   Throughput (MB/sec)   Equivalent Line Rate (Gbaud)   Year T11 Spec Completed   Market Availability
10GFCoE          2400                  10.313                         2008                      2009
40GFCoE          9600                  41.225                         TBD                       Market Demand
100GFCoE         24000                 103.125                        TBD                       Market Demand
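
As an illustration of the encapsulation step described above, the Python sketch below assembles a minimal FCoE frame layout: a Fibre Channel frame carried as the payload of an Ethernet frame. The 0x8906 FCoE EtherType reflects the published FCoE framing; the helper name, the placeholder header/trailer bytes and the example MAC addresses are illustrative assumptions, not details from this paper.

import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

def encapsulate_fc_frame(fc_frame: bytes, dst_mac: bytes, src_mac: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in an Ethernet/FCoE frame (simplified sketch).

    Real FCoE also carries a version field, reserved bits and SOF/EOF
    delimiters around the FC frame; they are folded into the placeholder
    header and trailer below for brevity.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(14)        # version + reserved + SOF (placeholder values)
    fcoe_trailer = bytes(4)        # EOF + reserved (placeholder values)
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

# Example: a dummy 36-byte FC frame between two hypothetical MAC addresses.
frame = encapsulate_fc_frame(bytes(36),
                             dst_mac=bytes.fromhex("0efc00000001"),
                             src_mac=bytes.fromhex("0efc00000002"))
print(len(frame), "bytes on the wire (before FCS)")

The important point for the cabling plant is that the encapsulated traffic rides ordinary Ethernet links, which is why the roadmap above tracks Ethernet line rates.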

OM3 and OM4: Preferred Fibers in the Data Center

OM3 and OM4 laser-optimized 50/125 μm multimode fibers are the fiber type of choice for connectivity in the data center. They provide a significant value proposition compared to single-mode fiber, as multimode fiber utilizes low-cost 850 nm transceivers for serial and parallel transmission. The IEEE 802.3ba 40/100G Ethernet Standard was ratified in June 2010 and specifies parallel optics transmission for multimode fiber. Parallel optic transmission was specified instead of serial transmission due to 850 nm VCSEL modulation limits at the time the guidance was developed. OM3 and OM4 are the only multimode fibers included in the standard. The 40/100G standard provides no guidance for CAT UTP/STP copper cable.

Figure 3 - 40GBASE-SR4 Parallel Optics
Figure 4 - 100GBASE-SR10 Parallel Optics

Table 2 provides the OM3- and OM4-specified distances for Ethernet and Fibre Channel. Each distance assumes 1.5 dB total connector loss, with the exception of OM4 at 40/100G, which assumes 1.0 dB total connector loss. OM3 and OM4 are fully capable of supporting legacy and emerging data rates, such that a 15- to 20-year service life is expected for the physical layer.
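
The connector-loss assumptions behind Table 2 translate directly into a simple channel loss estimate: fiber attenuation over the link length plus total connector loss. The Python sketch below is a minimal illustration, not a standards calculation; it assumes a worst-case OM3/OM4 attenuation of 3.5 dB/km at 850 nm and applies it to the two 40/100G cases in the table.

# Minimal channel loss estimate: fiber attenuation plus total connector loss.
# Assumes 3.5 dB/km worst-case OM3/OM4 attenuation at 850 nm.

FIBER_ATTEN_DB_PER_KM = 3.5

def channel_loss_db(length_m: float, connector_loss_db: float) -> float:
    """Return the estimated channel insertion loss in dB."""
    return (length_m / 1000.0) * FIBER_ATTEN_DB_PER_KM + connector_loss_db

# The two 40/100G cases from Table 2: OM3 at 100 m with 1.5 dB of connector
# loss, OM4 at 150 m with 1.0 dB of connector loss.
print(f"OM3, 100 m: {channel_loss_db(100, 1.5):.2f} dB")   # roughly 1.9 dB
print(f"OM4, 150 m: {channel_loss_db(150, 1.0):.2f} dB")   # roughly 1.5 dB

Applied to the 40/100G entries of Table 2, the estimates come to about 1.9 dB for OM3 at 100 m and about 1.5 dB for OM4 at 150 m, which shows why the longer OM4 reach depends on the tighter 1.0 dB connector-loss assumption.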

Table 2

850 nm Ethernet Distance (m)
         1G      10G             40G     100G
OM3      1100    300             100     100
OM4      1100    400(1)/550(2)   150     150

(1) Proposed distance for 10G standard
(2) Engineered length

850 nm Fibre Channel Distance (m)
         4G      8G      16G
OM3      380     150     100
OM4      480     190     125

High-Density Optical Connectivity

Network switching products are available with 48-port SFP+ line cards that use more than 1,000 OM3/OM4 fibers per chassis switch for 10G duplex fiber serial operation. Future 40/100G switches are projected to use more than 4,000 fibers per chassis where parallel optics is deployed. These high fiber counts demand high-density cable and hardware solutions to maximize utilization of pathways and spaces, ease cable management and simplify connections into system electronics.

Bend-optimized OM3/OM4 fiber offers significantly smaller cable diameters and hardware components that yield the highest connectivity density in the data center. Compared to traditional multimode fiber, bend-optimized OM3/OM4 fiber enables trunk cable diameters reduced by 15 to 30 percent and hardware patch panel densities of 4,000+ fibers. The reduced trunk cable diameter consumes less pathway and space and supports more efficient use of cable trays, resulting in major material cost savings (Figure 5).

Figure 5

Data centers need to install high-density 12-fiber MPO trunk cables with OM3/OM4 fiber today. These can be used for duplex fiber serial transmission while providing an effective migration path to parallel optics, which require an MPO interface into the switch electronics and the server NIC (Figure 6).
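
Returning to the per-chassis fiber counts quoted at the start of this section, they follow from simple multiplication of line cards, ports and fibers per port. The Python sketch below illustrates that arithmetic for a hypothetical 11-slot chassis; the slot count and the 40G ports-per-card figure are illustrative assumptions, not figures from this paper.

def chassis_fiber_count(line_cards: int, ports_per_card: int, fibers_per_port: int) -> int:
    """Total fibers terminating on a chassis switch."""
    return line_cards * ports_per_card * fibers_per_port

# Hypothetical 11-slot chassis: 48-port SFP+ cards (2 fibers per 10G duplex
# port) versus cards using 12-fiber MPO parallel-optic ports.
print("10G duplex:  ", chassis_fiber_count(11, 48, 2), "fibers")
print("40G parallel:", chassis_fiber_count(11, 32, 12), "fibers")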

Figure 6

High-density modular 4U and 1U hardware patch panels readily support duplex fiber serial transmission and simplify migration to parallel optics through the use of MPO/LC modules (Figure 7). MPO/LC modules break out the 12-fiber MPO connectors terminated on a trunk cable into simplex- or duplex-style connectors. Simplex- and duplex-style patch cords can then be used to patch into system equipment ports and cross-connect patch panels. The MPO/LC modules are easily removed and replaced with MPO adapter modules as needed to begin parallel optics transmission. 40G multimode transmission will use a 12-fiber MPO connector at the transceiver interface, and 100G multimode transmission will use a 24-fiber MPO connector.

Figure 7

Patch panels have integrated trays that contain the MPO/LC modules. Each tray holds four discrete MPO/LC modules to enhance modularity for moves, adds and changes. 4U and 1U patch panels have 12 and 2 trays, respectively. A 4U housing is typically used to connect into high-density electronics, as well as for cross-connects. A 1U housing is typically used for trunk cables that interconnect into top-of-rack edge switches. Figures 8 and 9 illustrate the patch panel designs, and Table 3 provides patch panel fiber capacities.
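
The breakout performed by the MPO/LC modules described above can be pictured as a mapping from the 12 MPO fiber positions to six LC duplex ports. The Python sketch below is a minimal illustration that assumes a straight pair-wise mapping; actual modules follow the TIA-568 polarity methods, which this sketch does not attempt to reproduce.

def mpo_to_lc_duplex(mpo_positions=range(1, 13)):
    """Pair 12 MPO fiber positions into 6 LC duplex ports (straight mapping).

    Position pairs (1,2), (3,4), ... become LC ports 1..6, with one fiber
    used for transmit and the other for receive.
    """
    positions = list(mpo_positions)
    return {port + 1: (positions[2 * port], positions[2 * port + 1])
            for port in range(len(positions) // 2)}

for lc_port, (tx, rx) in mpo_to_lc_duplex().items():
    print(f"LC duplex port {lc_port}: MPO positions {tx} (Tx) / {rx} (Rx)")

In practice the pairing is set by the module's internal fiber routing and the chosen polarity method rather than by position order alone.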

Figure 8 - 4U Patch Panel
Figure 9 - 1U Patch Panel

Table 3 - Patch Panel Fiber Capacities

4U Housing          10G Ports            40G Ports       100G Ports           100G Ports
                    2-fiber LC Duplex    12-fiber MPO    2 x 12-fiber MPOs    24-fiber MPO
Circuit Capacity    288                  192             96                   192
Fiber Capacity      576                  2304            2304                 4608

1U Housing          10G Ports            40G Ports       100G Ports           100G Ports
                    2-fiber LC Duplex    12-fiber MPO    2 x 12-fiber MPOs    24-fiber MPO
Circuit Capacity    48                   32              16                   32
Fiber Capacity      96                   384             384                  768
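
The fiber capacities in Table 3 are simply circuit capacity multiplied by fibers per circuit. The Python sketch below reproduces the 4U columns of the table from those two inputs as a sanity check; the dictionary mirrors Table 3 and is not an additional data source.

# Fiber capacity = circuits x fibers per circuit, reproducing the 4U row of Table 3.
ports_4u = {
    "10G, 2-fiber LC duplex": (288, 2),
    "40G, 12-fiber MPO":      (192, 12),
    "100G, 2 x 12-fiber MPO": (96, 24),
    "100G, 24-fiber MPO":     (192, 24),
}

for port_type, (circuits, fibers_per_circuit) in ports_4u.items():
    print(f"{port_type}: {circuits} circuits -> {circuits * fibers_per_circuit} fibers")

The 1U columns follow the same arithmetic with 48, 32, 16 and 32 circuits.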

The MPO/LC harness assembly has become a popular method for making connections into high-port-count network switches. An MPO-connectorized trunk cable terminates to the harness assembly at the network electronics patch panel. The harness assembly has an MPO connector on one end of the cable, while the other end is equipped with simplex- or duplex-style connectors. Compared to typical two-fiber jumpers, harness assemblies significantly reduce the bulk of cabling into the electronics, easing management and enhancing cooling efficiency. In addition, the harness assembly can be configured with staggered, connectorized legs that match the pitch of the electronics line card (Figure 10). When converting to parallel optics, you simply remove the harness assembly and replace it with the appropriate MPO jumper cable.

Figure 10

Conclusion

Existing and emerging network technologies are driving the need for increased data rates and fiber counts in the data center. High-density optical connectivity solutions are essential to address these trends, optimizing cable management and data center real estate usage while providing an easy migration from duplex fiber serial transmission to 12- and 24-fiber parallel optics transmission. OM3/OM4 optical connectivity solutions are well prepared to meet these challenges.

Corning Cable Systems LLC
PO Box 489
Hickory, NC 28603-0489 USA
800-743-2675
FAX: 828-325-5060
International: +1-828-901-5000
www.corning.com/cablesystems

Corning Cable Systems reserves the right to improve, enhance and modify the features and specifications of Corning Cable Systems products without prior notification. LANscape and Pretium are registered trademarks of Corning Cable Systems Brands, Inc. MTP is a registered trademark of US Conec, Ltd. All other trademarks are the properties of their respective owners. Corning Cable Systems is ISO 9001 certified. © 2008, 2011 Corning Cable Systems. All rights reserved. Published in the USA. LAN-1382-EN / November 2011