10/40/100 MIGRATION IN DATA CENTERS
The Three P's: Path, Polarity and Pins




ENTERPRISE NETWORK SOLUTIONS /// WHITE PAPER

10/40/100 Migration in Data Centers

FORWARD-LOOKING DATA CENTER DESIGNERS AND MANAGERS REALIZE THAT, NOW THAT THE STANDARDS ARE LARGELY DEFINED, MIGRATION OF ETHERNET LINKS FROM 10 GB/S TO 40 AND 100 GB/S, AS WELL AS FIBRE CHANNEL LINKS FROM 4 TO 8 TO 16 GB/S, IS RAPIDLY ACCELERATING.

With further interesting wrinkles such as high-density 10 Gigabit links (four 10 Gb/s channels combined in a single 40 Gb/s QSFP port, and ten 10 Gb/s channels combined in a single 100G port), physical aggregation layers, and direct-attach circuits, having a clear and defined plan is not just prudent, it is imperative. Fortunately, a few simple rules can remove much of the uncertainty from this process.

Background

Today, most fiber optic Ethernet channels operate at 10 Gb/s or less over a pair of multimode optical fibers, one for transmit and one for receive. The connector of choice for these channels is the duplex LC-type connector, on active electronic ports as well as in the physical cable plant. In the LC connector, one fiber acts as a transmit fiber and one as a receive fiber. [Figure: duplex LC pair, transmit and receive fibers] These pairs are often combined over trunk cables of 12, 24 or even much higher fiber counts, using cassettes or other passive branching devices that combine and split out the pairs. (Some power users prefer 192 or even 432 fiber trunks to save space in their pathways.)

The next step in data rate, 40 G, requires a different physical architecture. Because 40 G channels are composed of four parallel duplex channels (a total of eight fibers), the preferred connector interface is now the MPO-type multifiber connector. This connector contains one or more rows of parallel fibers, usually in multiples of twelve. A single-row MPO connector transmits over eight of the twelve fibers at 40G, leaving the remaining four dark. Multiple channels can be combined on higher fiber count MPOs.
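The pair-combining arrangement described above can be sketched in a few lines. This is only an illustration: the fiber numbering and the `breakout_pairs` helper are assumptions for the example, not from any standard.

```python
# Illustrative sketch: a trunk of N fibers broken out by a cassette into
# duplex channels, one transmit and one receive fiber per channel.
def breakout_pairs(fiber_count: int) -> list[tuple[int, int]]:
    """Split a trunk's fiber positions into (transmit, receive) pairs."""
    if fiber_count % 2:
        raise ValueError("duplex breakout needs an even fiber count")
    return [(f, f + 1) for f in range(1, fiber_count, 2)]

print(len(breakout_pairs(12)))  # 6 duplex channels from a 12-fiber trunk
print(len(breakout_pairs(24)))  # 12 duplex channels from a 24-fiber trunk
```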
For instance, three 8-fiber 40 Gb/s channels can be combined over a single two-row, 24-fiber MPO connector without abandoning any of the fibers in the connector or associated trunk cable. [Figure: three 40G ports (Port 1, Port 2, Port 3) combined over a single 24-fiber MPO]

The next step in Ethernet communication, 100 G, takes the parallel 10 G methodology a step further, combining ten parallel duplex channels over 20 fibers to form 100 G connections. The connector of choice in this case is a two-row, 24-fiber MPO connector, in which 20 of the 24 fibers are utilized and the four outermost fibers remain dark. (There is activity in some standards organizations to implement 100 G Ethernet over four 25 Gb/s parallel duplex links, a total of eight fibers, in which case the 100 G implementation looks very much like the 40 G system noted above. At this time, however, 25 G transmission has yet to be standardized. To minimize long-term risk, a prudent migration plan must take into account that one or even both solutions may be implemented somewhere in the future.)

Another leap, to 400 G, is in the initial planning stages. Possible implementations include 32 multimode fibers in parallel, or one or more single-mode pairs, to support transmission at this data rate. There are many technical and economic issues to be resolved, and these data rates will probably not be implemented in any significant volume until later in this decade.
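The fiber arithmetic for the three channel types above can be tabulated in a short sketch. The channel labels and the `dark_fibers` helper are illustrative assumptions; the used/total counts come from the text.

```python
# Fibers lit versus dark per channel type over LC/MPO connectivity,
# per the counts described above (illustrative, not a standards table).
CHANNELS = {
    "10G duplex LC": (2, 2),    # one transmit + one receive fiber
    "40G MPO-12":    (8, 12),   # four 10G duplex lanes, four fibers dark
    "100G MPO-24":   (20, 24),  # ten 10G duplex lanes, four outer fibers dark
}

def dark_fibers(used: int, total: int) -> int:
    """Fibers left unlit in the connector."""
    return total - used

for name, (used, total) in CHANNELS.items():
    print(f"{name}: {used}/{total} lit, {dark_fibers(used, total)} dark")
```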

Physical Layer Planning

Often neglected, physical layer planning can mean the difference between a smooth, logical migration and disorder just this side of chaos. With multiple options for data rate, connectivity and trunking, trying to configure a data center's physical layer mapping on the fly can rapidly turn into a self-defeating exercise. Fortunately, by following a few simple rules, much of the guesswork can be eliminated, while maintaining flexibility and the ability to migrate small or large portions of your data center as your demand and your budget dictate. No matter what, you have to have a plan.

Rule 1: Have a plan, develop it carefully, and stick to it!

One of the simplest rubrics available is called The Three P's: path, polarity and pins. Having a proper plan for the Three P's will get you a long way toward a smooth migration.

PATH

Put simply, path defines which groups of fibers at a given end of a link connect to which fiber groups at the other end. This includes all components in between: patch cords, branching assemblies or devices, trunks and hydra assemblies, and all of the associated hops or transpositions. An example path, in this case corresponding to that outlined in the TIA-568.3-D Draft 1.0, is shown below.

[Figure: example interconnect path: 10G switch ports connected to 10G servers through patch cords, a 24F LC-MPO blade or cassette (straight), a 24-fiber MPO trunk, and a 24F LC-MPO blade or cassette (flipped)]

This path delineates an interconnection between transceivers on a piece of active electronics (for example, a core switch) and a device at the other end (e.g., a server). Components include patch cords, blades or cassettes (which could be replaced by hydra-type assemblies) and trunk cables.
In a similar manner, a cross-connect can be represented the same way. In this case there are two trunk cables with a patching array between them. Draft and released standards exist for these implementations, from the TIA and other organizations.

[Figure: cross-connect path: 10G switch to 10G switch through two 24-fiber MPO trunks joined at a central patch field, each trunk terminated on 24F LC-MPO blades or cassettes (straight at one end, flipped at the other)]

Rule 2: Develop your plan using industry standards.

Don't try to develop practices and plans on the fly; this is rarely an effective approach. Of course, standards often include different options, usually developed to support different design approaches or networking philosophies.

Rule 3: If the standard has options, pick the option that works best for you and stay with it.

Certainly, different sections of a data center may require different practices; interconnection at an aggregation point may differ from SAN connections, for instance. But for any given function within a data center, the designs and practices should remain constant. ANSI/TIA-942-A, the standard for data center design, provides excellent guidance in this area.

Consistency in choice of path removes a number of practical issues in data center operations. For example, if some of the cable plant uses single-row, 12-fiber MPO connectivity and some uses dual-row, 24-fiber connectivity, a problem arises: single- and dual-row MPOs share the same outer mating mechanism, but the rows of fibers within the two connectors do not align.

[Figure: end views ("End A", "End B") of a 12-fiber MPO and a 24-fiber MPO, showing that the fiber rows do not line up]

So a 12-fiber MPO can plug into a 24-fiber MPO, but no light will pass through because the fibers are not aligned; in some cases damage can result as well. While color coding of strain reliefs or other visual indicators may be used, they are not always effective, especially in densely packed connection fields or poorly lit cabinets. Mixing pathway types in the same field invites operational difficulties.

POLARITY

Polarity differs from path in that polarity defines the relationships of the fibers within the path: a specific transmit fiber at one end must connect to a specific receive fiber at the other end. In single-pair applications, the transmit fiber of the pair on the active electronics is, by standard, always on the left looking into the transceiver. So the net effect of the polarity of the cabling infrastructure is that the pair must be flipped: the transmit at one end is connected to the receive at the other. A 10 G channel implemented with duplex LC connectors would be represented schematically this way, looking into the end of the connectors that plug into the active electronics.

There are a number of options to achieve this. One could make certain that the infrastructure (trunks plus cassettes, blades or other branching devices) always flips the pairs. A second would be to use a straight infrastructure with a patchcord that is straight-through at one end and a second patchcord that is flipped at the other end. In a cross-connect scenario, this can get even more complicated, as there are more flips enforced by the patch fields. If the infrastructure is designed to migrate to higher data rates, where the fiber groupings are multiples of eight or even twenty, yet more complications ensue. Clearly, minimizing variations in the infrastructure is important to design and installation, and especially during upgrades, migration and troubleshooting. Reducing variations is critical.

Rule 4: Minimize the cabling polarity variants within your infrastructure, and make certain your polarity choice does not limit the options of devices you can employ within the infrastructure.
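The flip bookkeeping behind these options reduces to a parity check. The sketch below is a toy model under the assumption that every component either keeps a duplex pair straight or swaps it; `channel_flips` and the component lists are illustrative, not from any standard.

```python
# Toy duplex-polarity model: each component either keeps the pair straight
# (False) or flips transmit/receive (True). A working channel must flip the
# pair an odd number of times, so transmit at one end meets receive at the other.
def channel_flips(components: list[bool]) -> bool:
    """True if the end-to-end channel flips the pair an odd number of times."""
    return sum(components) % 2 == 1

# Option one above: straight patchcords, the infrastructure does the single flip.
interconnect = [False, True, False]
# A cross-connect adds a second trunk and a flipped cord; the total stays odd.
cross_connect = [False, True, False, True, True]

print(channel_flips(interconnect))   # True: polarity correct
print(channel_flips(cross_connect))  # True
```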

Rule 5: All trunks or components that act as trunks should have uniform polarity.

There are several reasons for this. If trunks are uniform, changes to pathways due to changes in topology can be implemented through the devices that plug into the trunks, such as cassettes, blades or hydra assemblies. It is much easier to change out the device at the end of a trunk than to re-enter the pathways to replace the trunk itself. Upgrades to higher data rates can likewise be accomplished through a reduced set of components at the ends of the trunks. Finally, in cascaded topologies such as cross-connects, where more than one trunk (or component that acts as a trunk) may be present, the choice of end devices can still be standardized.

In 40 and 100 Gb/s applications, many devices self-discover, meaning that the electronics at the ends of the paths recognize these fiber interconnections. So why would a designer worry about polarity? There are a number of reasons:

- If the infrastructure must accommodate mixed data rates with different connectivity (LC vs. MPO), then duplex polarity is still important.
- Emerging applications such as high-density 10 G channels, where four 10 G circuits are shared over a single, eight-fiber 40G MPO port at the switch, require consistent polarity at the end devices where the 10 G channels are broken out.
- In applications where multiple channels are shared over a higher fiber-count backbone, such as three 40G channels over a 24-fiber trunk, six over a 48-fiber trunk, and so on, there must be consistency when these channels are assembled at one end and broken out at the other.
- In some cases, data rates may be lowered for a given backbone due to changes in topology or data center layout. There may then be a transition from one higher-speed, higher fiber-count channel to multiple lower-speed, lower fiber-count channels.
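The high-density breakout case above (four 10 G circuits over one eight-fiber 40G MPO port) can be sketched as a position mapping. The mirrored pairing shown is one common layout; the exact assignments depend on the polarity method chosen, which is precisely why consistency matters. `breakout_40g` and the position numbers are illustrative assumptions.

```python
# Illustrative breakout of an 8-fiber 40G MPO port into four 10G duplex
# channels, assuming transmit on the first four positions and receive,
# mirrored, on the last four (one common MPO-12 layout: 1-4 Tx, 9-12 Rx).
def breakout_40g(fibers: list[int]) -> list[tuple[int, int]]:
    """Pair eight MPO fiber positions into four (tx, rx) duplex channels."""
    if len(fibers) != 8:
        raise ValueError("a 40G MPO port uses exactly eight fibers")
    return list(zip(fibers[:4], reversed(fibers[4:])))

print(breakout_40g([1, 2, 3, 4, 9, 10, 11, 12]))
# [(1, 12), (2, 11), (3, 10), (4, 9)]
```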
Flexibility in choice of end devices, paradoxically, is enhanced by greater uniformity in relatively fixed infrastructure elements such as trunks. With this in mind:

Rule 6: Once you have chosen your trunk polarity, choose uniform designs and components for 10G, high-density 10G, 40G and 100G channels. Be certain that migration to both higher and lower data rates is possible.

Rule 7: Channel designs should account for future uncertainties (e.g., eight- or twenty-fiber solutions for 100 G channels).

Polarity mapping can become quite complicated without uniform design practices. Trying to figure out polarities each time a network change occurs is time-consuming, frustrating and prone to error. Solid design practices eliminate these unnecessary distractions.

PINS

There is an additional wrinkle to the discussion. Because of the large number of fibers in MPO connectors, arrayed over a very small space, these connectors require enhanced alignment mechanisms. The MPO interface has a pair of high-precision alignment pins on one half of the connection and mating high-precision holes on the other half. The connectors are dimensionally identical otherwise, and a pinned connector can reside on either side of the MPO interface. While the MPO is a time-proven and highly effective interface, this leads to practical issues:

- Pinned and non-pinned connectors are not differentiated other than by the presence of the pins.
- Transceiver ports on active equipment such as switches and servers virtually always have pins inside the active port. Plugging pinned patchcord plugs into these pinned transceiver ports can, in some cases, cause pin damage.
- Plugging pinned plugs into pinned plugs can, in some cases, cause pin damage.
- There is often no way to know whether the connector on the back of a passive panel interface has pins or not.
- Non-pinned connectors can plug into non-pinned connectors. While damage is unlikely, without the alignment pins optical performance will be sharply degraded, usually to the point of loss of the optical connection.
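The mating outcomes just listed amount to a small truth table. A minimal sketch follows, assuming a simple string summary per case; `mpo_mating` is an illustrative helper, not a real API.

```python
# MPO pin-mating outcomes as described above: exactly one side of a mated
# pair should carry the alignment pins.
def mpo_mating(side_a_pinned: bool, side_b_pinned: bool) -> str:
    if side_a_pinned and side_b_pinned:
        return "pins meet pins: risk of pin damage"
    if side_a_pinned != side_b_pinned:
        return "pins align the fibers: good loss"
    return "no pins on either side: mates, but poor alignment and high loss"

# The safe pairing: a non-pinned patchcord into a pinned transceiver port.
print(mpo_mating(False, True))  # pins align the fibers: good loss
```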

Shown schematically: [Figure: pinned-to-pinned MPO plugs will not mate and risk pin damage; non-pinned-to-non-pinned plugs mate but with high loss; pinned-to-non-pinned plugs mate with good loss]

So keeping track of the pins in one's infrastructure is important. The infrastructure needs to be uniform and predictable:

Rule 8: Have a uniform practice for MPO pins by data rate, for both trunks and patchcords.

And installation should be simplified:

Rule 9: Have a uniform practice for MPO pins on trunks. Trunks should have the same connectors on both ends.

Suggestion: Non-pinned trunks reduce the risk of pin damage during installation. Having the pins on the mating devices, especially blades or cassettes, allows for greater pin protection. Having the same connector on both ends of the trunk also simplifies design and installation, since it is not necessary to keep track of which end is which. These uniform practices likewise reduce the number of patchcord variants required, which simplifies MACs (moves, adds and changes) and reduces inventory.

Keeping track of Path, Polarity and Pins greatly simplifies data center design and provides a framework for simplified, logical migration of data rates and changes in topology. There is one additional rule that experience has taught is extremely helpful in structured design:

Rule 10: Once you have defined your practices and migration plan, diagram each variation and step in your migration, and assemble and check a physical model of each to confirm your designs.

This is not complicated. While active switch ports would be ideal for such checks, they can be costly and very time-consuming. Physical circuits can be verified with a simple light source, and proper plugging can be inspected visually. This one-time verification can save countless hours of design, MAC and troubleshooting time in the long run.

SUMMARY

Agile design for a flexible data center takes planning, especially to support future migrations in topology and data rate. A few simple rules can help:

Rule 1: Have a plan, develop it carefully, and stick to it!

Rule 2: Develop your plan using industry standards.
Rule 3: If the standard has options, pick the option that works best for you and stay with it.

Rule 4: Minimize the cabling polarity variants within your infrastructure, and make certain your polarity choice does not limit the options of devices you can employ within the infrastructure.

Rule 5: All trunks or components that act as trunks should have uniform polarity.

Rule 6: Once you have chosen your trunk polarity, choose uniform designs and components for 10G, high-density 10G, 40G and 100G channels. Be certain that migration to both higher and lower data rates is possible.

Rule 7: Channel designs should account for future uncertainties (e.g., eight- or twenty-fiber solutions for 100G channels).

Rule 8: Have a uniform practice for MPO pins by data rate, for both trunks and patchcords.

Rule 9: Have a uniform practice for MPO pins on trunks. Trunks should have the same connectors on both ends.

Rule 10: Once you have defined your practices and migration plan, diagram each variation and step in your migration, and assemble and check a physical model of each to confirm your designs.

Disciplined planning is critical for robust designs. Simple rules for the Three P's support this discipline, both for initial designs and for the long-term evolution and enhanced utility of the data center infrastructure.

te.com

TE Connectivity, TE Connectivity (logo) and Every Connection Counts are trademarks. All other logos, products and/or company names referred to herein might be trademarks of their respective owners.

The information given herein, including drawings, illustrations and schematics, which are intended for illustration purposes only, is believed to be reliable. However, TE Connectivity makes no warranties as to its accuracy or completeness and disclaims any liability in connection with its use. TE Connectivity's obligations shall only be as set forth in TE Connectivity's Standard Terms and Conditions of Sale for this product, and in no case will TE Connectivity be liable for any incidental, indirect or consequential damages arising out of the sale, resale, use or misuse of the product. Users of TE Connectivity products should make their own evaluation to determine the suitability of each such product for the specific application.

© 2014 TE Connectivity Ltd. family of companies. All Rights Reserved. 37376.2E /4 Revision

Contact us: Greensboro, NC USA 27409-8420 | Tel: 1-800-553-0938 | Fax: -77-986-7406 | www.te.com/enterprisenetworks