Obsolete Fiber Technology? Not in my Data Center!




White Paper, October 2011
Author: James Young, Technical Director, Asia Pacific
www.commscope.com

Do you remember when people used to say that fiber optics were "future proof"? Well, they are and they aren't. Pay close attention to the capabilities of the fiber optic technologies and vendors you invest in, and combine them with modern application warranties to get the most from your investment in this critical part of the data center. Industry design standards need to evolve to match high-performance fiber optic systems. Work continues, and eventually the ISO and TIA standards will catch up, but in the meantime: buyer beware.

Centralized Network Architecture (CNA), or fiber to the desk, was a typical example of a "future proof" design using fiber optics, and yet these networks have indeed become obsolete. Most network owners have had a similar experience in network backbones and data centers, where earlier types of multimode fiber have seen usable distances decrease over time as bandwidth requirements have increased. Many customers are replacing OM2 fiber as it becomes clear that its future is limited. Many of the optical connectors once used in data center networks, such as Biconic, ST, MT-RJ and even SC, are now obsolete or heading that way soon.

Many believe the international standards for fiber network technologies lag the needs and realities of practical data center networks today. Market forces drive fiber and copper technologies to adapt to the new realities of higher bandwidth and density. In the past, customers believed they were purchasing the ultimate in future capabilities when they invested in CNA fiber networks. Today, application requirements are not fully aligned with industry standards and common practice. Understanding these gaps can help prevent investment in obsolete fiber optic technologies!

Fiber optics, like all technology in the data center, evolves to keep pace with the relentless exponential growth of bandwidth requirements.
What was once considered high-bandwidth multimode fiber has been pushed to its limits. Multimode fiber has evolved, however, increasing its bandwidth capabilities, and this evolutionary path is interesting in that it has revealed the fiber's limitations more clearly. Higher bandwidth capacities are achieved through a series of small improvements and tweaks to optical components. Necessarily, a re-think of the design and installation specification of fiber networks is required for a modern data center.

Fiber Optic Cable Distance by Speed (Source: Demartek)

  Speed     OM1    OM2    OM3    OM4
  1 Gb/s    300m   500m   860m   -
  2 Gb/s    150m   300m   500m   -
  4 Gb/s    70m    150m   380m   400m
  8 Gb/s    21m    50m    150m   190m
  10 Gb/s   33m    82m    150m   190m
  16 Gb/s   15m    35m    100m   125m

As bandwidth increases, the limitations of fiber optic cable begin to show. The longer the cable, the greater the impact on signal quality: as length increases, cable limitations degrade the signal until it becomes unusable. Newer cables have been improved to support higher-bandwidth applications; older cables must be removed, and connectors must deliver better performance to meet the needs of modern data centers. OM1 and OM2 fiber types are now considered obsolete for data center applications.
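The distance limits above lend themselves to a simple lookup. Here is a minimal sketch (the data is transcribed from the Demartek table; the helper name is my own) that checks whether a proposed point-to-point link is within the table's limit:

```python
# Maximum supportable distances in meters, by speed (Gb/s) and fiber type,
# transcribed from the table above (None = not rated in the table).
MAX_DISTANCE_M = {
    1:  {"OM1": 300, "OM2": 500, "OM3": 860, "OM4": None},
    2:  {"OM1": 150, "OM2": 300, "OM3": 500, "OM4": None},
    4:  {"OM1": 70,  "OM2": 150, "OM3": 380, "OM4": 400},
    8:  {"OM1": 21,  "OM2": 50,  "OM3": 150, "OM4": 190},
    10: {"OM1": 33,  "OM2": 82,  "OM3": 150, "OM4": 190},
    16: {"OM1": 15,  "OM2": 35,  "OM3": 100, "OM4": 125},
}

def link_supported(speed_gbps: int, fiber: str, length_m: float) -> bool:
    """True if a point-to-point link of this length is within the table limit."""
    limit = MAX_DISTANCE_M[speed_gbps][fiber]
    return limit is not None and length_m <= limit

# A 120 m run at 8 Gb/s is fine on OM3 (150 m limit) but not on OM2 (50 m):
print(link_supported(8, "OM3", 120))  # True
print(link_supported(8, "OM2", 120))  # False
```

Note that these are point-to-point values; as the paper explains below, connector losses in a structured design reduce the usable reach further.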

MPO connectors are available in 12- through 72-fiber versions.

Multimode fiber cable (OM3 or OM4) is now the media of choice for most data center applications. OM3/OM4 fibers provide the dramatic increases in bandwidth required to match the needs of next-generation network solutions deployed today in support of 40 and 100 Gigabit speeds. That is not to say that each individual fiber can support 40G or 100G over long distances; rather, standards have evolved to allow multiple fibers to work together to provide an aggregate bandwidth of N x 10G, where N is the number of fibers utilized. In this way, 8 fibers can work together to provide 40G bi-directional bandwidth; similarly, twenty fibers can deliver 100G links.

While grouping multimode fibers together provides higher bandwidth, it creates some practical issues. We typically think of each fiber link as two connections: Transmit and Receive. So how would we keep track of 8 or 20 fibers? Fortunately, a solution for handling multiple fibers was invented many years ago by NTT. Multiple-fiber connectors like the Multifiber Push-on (MPO) can organize these fibers into physical groups that make logical and mechanical sense.

The design of data centers requires careful planning and organization to ensure reliability and scalability. We have learned, sometimes the hard way, that ad-hoc cabling will eventually lead to operational disaster. The TIA-942(A) data center standard, for example, depicts best practice for organizing a data center, creating the concepts and rules used for a structured approach to data center cabling design. These concepts provide for patching fields to distribute resources within the data center. A practical fiber solution must therefore support multiple connections in order to support these structured cabling designs organizing resources in the data center.
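The N x 10G aggregation described above is simple arithmetic: each 10G lane needs one transmit and one receive fiber. A quick sketch (the helper name is mine; the standard names in the comments are the parallel-optics variants these counts correspond to):

```python
def fibers_required(total_gbps: int, lane_gbps: int = 10) -> int:
    """Parallel-optics fiber count: one Tx and one Rx fiber per lane."""
    lanes = total_gbps // lane_gbps
    return 2 * lanes

print(fibers_required(40))   # 8 fibers (as in 40GBASE-SR4: 4 lanes x 10G)
print(fibers_required(100))  # 20 fibers (as in 100GBASE-SR10: 10 lanes x 10G)
```

This is why the 12-fiber MPO maps neatly onto 40G (8 of 12 positions used), while 100G at 10G per lane spans two rows of a 24-fiber arrangement.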
[Figure: tiered data center topology. FC core/access and IP/iSCSI core switches feed spine/distribution and leaf/access layers, which connect through primary and secondary intelligent/iPatch fiber (OM3/OM4) and copper (Cat 6/6A) consolidation/cross-connect fields to storage, OBM/KVM and rows of server cabinets using InfiniBand and other server interconnects.]

Data centers are now being designed to be highly scalable. There are many potential applications, topologies and construction methods in use today. Data center operators are looking for models that minimize capital outlay and optimize operational costs. This has led many designers to look to modular systems that can scale: starting small but able to grow quickly and easily to support expansion. Solutions that can be changed, re-configured and reused offer the building blocks for these designs.

[Figure: a six-step modular build-out using 360G2-4U IP shelves, InstaPATCH Plus trunk cables, MPO adapter panels, InstaPATCH Plus DM2-24LC modules, array patch cords and fiber patch cords connecting servers, storage and a cross-connect director rack.]

In addition to patching flexibility and multiple fibers, the reach, or distance support, of a fiber solution adds greatly to its scalability and utility. Consider larger data centers, for example, where multiple connection fields and large coverage areas are required: the maximum distance the fiber cabling system can support becomes a primary consideration. Application specifications now define the cabling support that 40G and 100G Ethernet optics will require. We know that for simple point-to-point links, OM3 fiber can support distances of up to 100 meters; OM4 fiber can similarly support distances of 150 meters. It is important to understand how this applies to a real-world data center design: with multiple links and interconnections representing our zone and tiered infrastructure, designs are now very rarely point-to-point configurations. Interconnection loss (the loss from each connector in the link) decreases the total distance, or reach, that can be supported, and the distance lost to interconnection loss varies. Multiple links and patch points must be included in system design calculations, and their impact on reliability must be understood and managed.
Applications such as 8G Fibre Channel have their own rules for deployment. For both Fibre Channel and Ethernet, those rules require high-performance fiber and high-performance connections to be useful in data centers using structured cabling. Just as low-bandwidth fiber has become obsolete, the connector performance specified in the current ISO and TIA standards has become obsolete. The current industry standards specify a connector loss of 0.75 dB per connector; the maximum loss tolerated by modern high-speed applications could be reached with just two such standard connectors. The good news is that today we can easily exceed this standard value for connector loss. The bad news is that we cannot continue to refer to these standard values when we specify practical data center systems. A new method of specifying optical system performance is needed.

The total link loss from end to end is obtained by adding up all the component losses, as specified by the manufacturer; the total must be kept below the application limit. Specifying definitive, risk-free designs is the intent of industry standards. End-users have not had to engineer their own solutions in the past because the standards provided engineering guidelines and common application support.

Consider what an end-user might face without the benefit of a workable standard. A typical design might require a link with 3 patch points and 160 meters of fiber. Will this link support 8G Fibre Channel? What if we want to make changes in the future, where more patching or longer distances are required? Who will certify new configurations? Will the application requirements be met, and will it support all vendors' products?

What should our New Standard be? Defining it can be approached from a component perspective. For example, the accepted standard value for connector loss is 0.75 dB, yet 0.2 dB or less is a routine value we might actually expect to see. The application requirements dictate the absolute maximum value for total system loss, which might be something less than 1.5 dB. So start with the total loss allowed, subtract the loss of the fiber we are using, and then divide the balance among the connectors we need, to see what connector performance is required. Finally, we can compare that loss value to the stated claims from the cabling vendor. Note that the vendor must guarantee maximum loss values, as opposed to something more optimistic like "average" or "typical" numbers, to ensure design conformance. In this approach the end-user designs the system and certifies support for the application in question; the vendor usually takes no responsibility for the overall system design but supplies much of the data needed to make the design decisions.
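The budgeting method just described can be written out as a short calculation. This sketch (the function name and the illustrative fiber attenuation are my own assumptions; the 1.5 dB budget and 3-connection, 160 m link come from the text's example) divides the remaining budget among the connectors and compares it with a vendor's guaranteed per-connector maximum:

```python
def required_connector_loss_db(app_budget_db: float,
                               fiber_loss_db_per_km: float,
                               length_m: float,
                               n_connectors: int) -> float:
    """Per-connection loss the design can tolerate: start with the total
    application loss budget, subtract the fiber's contribution, then split
    the remainder evenly across the connections in the link."""
    fiber_loss = fiber_loss_db_per_km * (length_m / 1000.0)
    return (app_budget_db - fiber_loss) / n_connectors

# Example: 1.5 dB application budget, an assumed 3.0 dB/km multimode
# attenuation at 850 nm, 160 m of fiber and 3 patch points.
budget = required_connector_loss_db(1.5, 3.0, 160, 3)
print(round(budget, 3))  # 0.34 dB per connection

# A vendor guaranteeing a 0.2 dB maximum per connection meets this;
# the 0.75 dB standard value clearly does not.
```

The same arithmetic run in reverse (fix the connector loss, solve for length) is what produces the application tables shown in the next section.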
Some end-users feel there must be a better way to deal with this issue, and some manufacturers are responding with guaranteed application guides; an example is given below. Here the overall application support is described using the essential design elements of the data center topology. Application guides (like the 4G FC table below) make it easier to design support for new applications. After evaluating the data center design requirements, length, number of connections and speeds can be determined. Data center designs conforming to the table are easily verified and guaranteed to support the application.

4 Gigabit Fibre Channel, 850 nm Serial (FC-PI-4 400-MX-SN-I)
Supportable Distance ft (m), LazrSPEED 550 with LC connections

  # LC Connections*   1 MPO        2 MPOs       3 MPOs       4 MPOs       5 MPOs       6 MPOs
  0                   1310 (400)   1310 (400)   1310 (400)   1310 (400)   1310 (400)   1310 (400)
  1                   1310 (400)   1310 (400)   1310 (400)   1310 (400)   1310 (400)   1310 (400)
  2                   1310 (400)   1310 (400)   1310 (400)   1310 (400)   1310 (400)   1250 (380)
  3                   1310 (400)   1310 (400)   1310 (400)   1310 (400)   1310 (400)   1250 (380)
  4                   1310 (400)   1310 (400)   1310 (400)   1310 (400)   1250 (380)   1180 (360)
  5                   1310 (400)   1310 (400)   1310 (400)   1250 (380)   1180 (360)   1180 (360)
  6                   1310 (400)   1310 (400)   1310 (400)   1250 (380)   1180 (360)   1120 (340)

Changes in data center design, applications and capacity occur frequently. A typical example would be an increase in bandwidth for storage applications, perhaps a move from 4G Fibre Channel to 8G Fibre Channel. Newer technologies provide increases in throughput and, in general, as new generations of equipment are introduced, port density also tends to increase. When evaluating the deployment of new equipment, two questions come up: Will the current infrastructure support an upgrade in speed? Can I reach enough of the floor to make use of the increased number of ports in the new switch?

Again, we can make use of the application tables to answer these questions. The table below shows the impact of the new application: the maximum supported distance decreases as speed increases. The example shown is for OM4 fiber. For the worst case shown (6 LC connections and 6 MPOs), the table indicates a maximum cable distance of 150 meters for 8G Fibre Channel versus 340 meters for 4G Fibre Channel. While this is a large number of connections, it represents real-world requirements for some applications.
8 Gigabit Fibre Channel, 850 nm Serial, Limiting Receiver (FC-PI-4 800-MX-SN)
Supportable Distance ft (m), LazrSPEED 550 with LC connections

  # LC Connections*   1 MPO       2 MPOs      3 MPOs      4 MPOs      5 MPOs      6 MPOs
  0                   790 (240)   740 (225)   740 (225)   690 (210)   690 (210)   640 (195)
  1                   740 (225)   740 (225)   690 (210)   690 (210)   640 (195)   640 (195)
  2                   740 (225)   740 (225)   690 (210)   640 (195)   640 (195)   590 (180)
  3                   740 (225)   690 (210)   690 (210)   640 (195)   640 (195)   590 (180)
  4                   690 (210)   690 (210)   640 (195)   640 (195)   590 (180)   540 (165)
  5                   690 (210)   640 (195)   640 (195)   590 (180)   590 (180)   540 (165)
  6                   690 (210)   640 (195)   590 (180)   590 (180)   540 (165)   490 (150)
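Application tables like this one are easy to consult programmatically when checking many candidate topologies. This sketch (data transcribed in meters from the 8G table above; the helper name is mine) looks up the guaranteed reach for a given patching topology:

```python
# Maximum supportable distance in meters for 8G Fibre Channel over
# LazrSPEED 550, indexed as [# LC connections][# MPOs - 1].
# Values transcribed from the application table above.
REACH_8GFC_M = [
    [240, 225, 225, 210, 210, 195],  # 0 LC connections
    [225, 225, 210, 210, 195, 195],  # 1
    [225, 225, 210, 195, 195, 180],  # 2
    [225, 210, 210, 195, 195, 180],  # 3
    [210, 210, 195, 195, 180, 165],  # 4
    [210, 195, 195, 180, 180, 165],  # 5
    [210, 195, 180, 180, 165, 150],  # 6
]

def max_reach_8gfc(n_lc: int, n_mpo: int) -> int:
    """Guaranteed 8G FC reach (m) for a link with these connection counts."""
    return REACH_8GFC_M[n_lc][n_mpo - 1]

# Even the worst case in the table, 6 LC connections and 6 MPOs,
# still supports a 150 m channel:
print(max_reach_8gfc(6, 6))  # 150
```

A designer can iterate over candidate (LC, MPO) combinations for each zone and reject any whose table value falls below the required run length.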

Network planners and data center designers can use these tables to model future network upgrades. If plans include deploying higher speeds in the future, then zone sizes and patching topologies can be chosen in advance to provide for those future applications. A future plan might include 16G Fibre Channel, for example; the coverage area of a 16G Fibre Channel director switch will be smaller than that of an 8G Fibre Channel switch. Again, the application support table for 16G Fibre Channel can be consulted to determine the maximum reach and patching combinations available for 16G switches, and the initial data center topology can then be designed to support an eventual deployment of 16G technology. The same process applies to 40G and 100G Ethernet planning: application tables provide the guidelines for design and deployment.

16 Gigabit Fibre Channel, 850 nm Serial (FC-PI-5 1600-MX-SN)
Supportable Distance ft (m), LazrSPEED 550 with LC connections

  # LC Connections*   1 MPO       2 MPOs      3 MPOs      4 MPOs      5 MPOs      6 MPOs
  0                   560 (170)   540 (165)   520 (160)   510 (155)   480 (145)   460 (140)
  1                   540 (165)   520 (160)   510 (155)   490 (150)   480 (145)   440 (135)
  2                   520 (160)   510 (155)   490 (150)   480 (145)   460 (140)   430 (130)
  3                   520 (160)   490 (150)   480 (145)   460 (140)   440 (135)   410 (125)
  4                   510 (155)   490 (150)   460 (140)   440 (135)   430 (130)   390 (120)
  5                   490 (150)   480 (145)   440 (135)   430 (130)   390 (120)   380 (115)
  6                   480 (145)   460 (140)   430 (130)   410 (125)   380 (115)   360 (110)

*CommScope supplies these application tables depicting guaranteed application support. The tables describe the SYSTIMAX support capability, which exceeds general industry standards.

In conclusion, the design of fiber systems for the data center requires a new approach, one that includes optical system specifications directly linked to application-based support for modern high-speed applications.
A design approach that enables a clear understanding of design options is possible when manufacturers provide application design guidelines for end-users. Specifications in terms of topology and reach are a more appropriate method for system designers and operators. Guaranteed support for these application guidelines ensures that the system vendor shares the design and support responsibilities in an appropriate fashion.

www.commscope.com
Visit our website or contact your local CommScope representative for more information. © 2011 CommScope, Inc. All rights reserved. All trademarks identified by ® or ™ are registered trademarks or trademarks, respectively, of CommScope, Inc. This document is for planning purposes only and is not intended to modify or supplement any specifications or warranties relating to CommScope products or services. 10/11