Datacenter Performance



Datacenter Performance: Dramatic Advances Driven by Innovative Power and Interface Connectivity Technologies

We're seeing connectivity reaching everywhere and reaching further, with Internet traffic volume exploding and data-storage needs growing at hard-to-comprehend rates. The amount of physical infrastructure, connectivity, and electrical power required to support this growth is astounding. This infrastructure, often referred to simply as data and computing centers, divides into four general classes. Such a center can be designed for brief but high transaction rates, such as those that large retailers, search engines, and other companies require; for traditional IT applications, often referred to as enterprise computing; for hosting cloud-based applications, which need robust and consistent performance; or for single-purpose, extremely low-latency transactions, as demanded by stock traders. Regardless of the category, they all share two almost unquenchable thirsts: the need for data pathways with ever-higher speed and density, and the need for more electrical power delivered throughout to the electronics (along with removal of the resulting heat).

It's ironic, and perhaps counter-intuitive, that the dramatic enhancement in the performance and capability of these facilities depends on two very different aspects of technology, yet their futures and fortunes are closely linked.

The data rate imperative

Within the datacenter, gigabit links are being pushed to ever-higher rates. Internal data lanes operating at 10 Gbit/s are rapidly being superseded by lanes at 25 Gbit/s, 40 Gbit/s, and even 100 Gbit/s. There are single-lane approaches, as well as multiple higher-speed links routed in parallel for yet-higher aggregate speeds; for example, a run of 4 x 25 Gbit/s lanes yields 100 Gbit/s overall.

Copper wire cable is the most obvious physical medium for many of these links because of its moderate cost, ease of use, and historical momentum. But those factors are no longer enough to make it the backbone medium of choice in many cases, since it cannot achieve the rates and distances demanded. Instead, designers are increasingly turning to fiber-optic (FO) links, which support data rates of tens of Gbit/s far more readily and can go beyond what copper can offer. (Fiber is already the standard medium for long-distance external links, of course.) But there is a challenge in going to FO as well: it is initially more expensive, installation is more challenging, and even making the necessary physical connections is more difficult.
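Aggregate link speed in these multi-lane configurations is simply the per-lane rate multiplied by the lane count. The minimal Python sketch below (the function name and the extra lane combination are illustrative, not from the article) just makes that arithmetic explicit, reproducing the 4 x 25 Gbit/s example above.

# Minimal sketch: raw aggregate rate of a link built from parallel lanes.
# Illustrative only; real links also carry encoding and protocol overhead.

def aggregate_gbps(lanes: int, gbps_per_lane: float) -> float:
    """Aggregate data rate, ignoring encoding and protocol overhead."""
    return lanes * gbps_per_lane

if __name__ == "__main__":
    print(aggregate_gbps(4, 25))   # 100.0 -> the 4 x 25 Gbit/s example above
    print(aggregate_gbps(4, 10))   # 40.0  -> e.g. a four-lane 10 Gbit/s link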

A general guideline is to use "copper where you can, but fiber if you have to," and the "have to" part is increasingly dominating the tradeoff balance.

There is a difficult tradeoff to be made with data links: as the data rate goes up, the reach, or achievable distance, goes down. The highest rates are achievable only over shrinking distances, so any gain in speed may be outweighed by the need for repeaters, which extend the data-path distance, or by the requirement for a more tightly packed design. The speed-versus-reach tradeoff is sharper for copper than for FO links, but for either one the system architect faces a dilemma: whether it is called a compromise or a tradeoff, there is no simple answer.

Power issues also dominate

From a technical and engineering standpoint, providing operating power to the datacenter and removing the heat that power supplies and components dissipate are very far removed from issues of higher data rates. Yet power-related concerns can define system design and implementation as much as the need for speed. Power consumption (and thus dissipation) per rack has increased by a factor of 10 over the last decade, according to reputable studies. This increase creates two closely linked problems: supplying the increased amount of electrical power, and removing the heat from both the power supplies and the active electronics in an ever more crowded enclosure.

Complicating the power-delivery issue is the reality of cabling and connectors. High-level conceptual schematics and block diagrams may show only a few power-supply blocks and interconnects in the overall path from high-voltage source to final low-voltage rails, but the actual situation is quite different. Because of the power flow within the larger datacenter, the physical construction of the racks, and the connections to the boards within each rack, dozens of cables and associated connectors are needed for unavoidable power-path transitions. At each connector, even a small amount of contact resistance and the resulting voltage drop (IR loss) produces heat dissipation (I²R), yet the temperature rise at the connector must be kept modest (typically under 30 °C) for safety and performance. Complicating the issue further is the growing demand for connectors that support hot swapping of boards, which are removed or plugged in with power on. This can lead to power-sequencing problems and contact degradation if the connectors are not appropriate or the contact materials are not robust.
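To put rough numbers on the I²R point, the short sketch below computes the dissipation in a single contact and along a chain of contacts. The current and resistance values are illustrative; they echo the per-contact figures quoted for the Multi-Beam XL system in the next section, and the actual temperature rise of a connector also depends on its construction, derating, and airflow, not on I²R alone.

# Back-of-the-envelope sketch of resistive loss in a power path.
# Values are illustrative; real connector temperature rise also depends on
# construction, derating, and airflow, not just I^2 * R.

def contact_loss_w(current_a: float, resistance_ohm: float) -> float:
    """Power dissipated in one contact: P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

def path_loss_w(current_a: float, resistance_ohm: float, contacts: int) -> float:
    """Total dissipation across every contact the current passes through."""
    return contacts * contact_loss_w(current_a, resistance_ohm)

if __name__ == "__main__":
    # Roughly the per-contact figures cited below for the Multi-Beam XL:
    # up to 43 A per contact at about 0.7 milliohm end-of-life resistance.
    print(round(contact_loss_w(43, 0.0007), 2))   # ~1.29 W in one contact
    # A dozen such transitions along the power path adds up:
    print(round(path_loss_w(43, 0.0007, 12), 1))  # ~15.5 W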

Technology innovation offers improved alternatives

Meeting the ever-growing requirements of datacenters requires innovation at all levels, starting with basic contacts and copper cable construction. These factors are sometimes assumed to be mundane, but that is certainly not the case. A few examples make clear that no single improvement is the answer; instead, advances ranging from basic power- and data-connector configurations to advanced fiber interfaces are all part of the solution.

In connectors, the TE Connectivity Multi-Beam XL Power Distribution Connector System has become a de facto connector standard for modular, hot-swappable power-distribution systems, supporting AC and DC power in the same connector along with pre-defined power and signal sequencing, which is critical for hot swapping (Figure 1). It is based on a unique multi-cantilever-beam design for the power contact, which is rated at up to 43 amps per contact. It uses base metals with high conductivity for a contact resistance of only 0.7 milliohms even at end of life, and it also has lower mating forces than alternative designs.

The Multi-Beam XL Power Distribution Connector System is available in both board-to-board and cable-to-board versions, each offered in a highly modular platform. This supports custom variations in the number of power and/or signal contacts with little or no tooling cost, an added benefit for designers who need to tailor their signal and power configurations. Standardized connectors such as these, which meet multiple, demanding, and often conflicting requirements, are a major boon to designers. In addition to their basic performance, their high volume of design-ins means they are fully characterized and understood, minimizing the surprises that often accompany leading-edge products.

On the data-flow side, designers need to route signals in many ways: from each rack to the top-of-rack switch, to the end-of-row switch, from the end-of-row switch to the core switch, to an aggregation switch, to cite just a few of the many stages and transitions. But the large number of pathways and parallel cables means that vital cooling airflow can be choked off by the bundles of copper cables. To overcome these limitations, TE developed QSFP+ direct-attach copper cable assemblies as a high-speed, cost-effective alternative to fiber optics for Ethernet, Fibre Channel, and InfiniBand applications (Figure 2). In addition to standard 30, 28, and 26 AWG wire gauges, TE offers this technology in fine-gauge, 33 AWG, 8-pair cable assemblies. This satisfies the need for ultra-thin, lightweight, highly flexible cabling in high-density, intra-rack applications, since the thinner-wire assembly provides improved airflow channels. Simply fabricating the assemblies with thinner wire was not a quick and easy solution, because doing so would normally degrade signal-integrity performance to unacceptable levels. But the technology used in the QSFP+ design does not make this compromise, and it supports 100 Gbit/s data rates with reach comparable to the heavier-gauge cables. As a result, designers have a relatively painless way of maintaining and even upgrading performance without a major redesign of their wiring topology and routing.

In addition to cables and connectors, the passive backplane is another area where chokepoints can slow down overall system performance. That is where the revolutionary design of the STRADA Whisper Backplane Interconnection System from TE is a major advance: it transfers data at 25 Gbit/s and offers scalability up to 40 Gbit/s, allowing designers to achieve efficient future system upgrades without costly backplane or mid-backplane redesigns (Figure 3). It features extremely low noise, low insertion loss, and little to no skew, a result of signal contacts being arranged horizontally in high-speed differential pairs. The system simplifies and improves backplane board design while maintaining signal integrity and saving board space. Each differential pair is individually shielded and surrounded by six ground connection points for excellent signal integrity and EMI performance. Insertion loss is less than 1 dB and is linear up to 20 GHz. The benefit is that the backplane, which is easily overlooked as a limiting factor at the system level, is no longer a potential bottleneck. To simplify implementation, the STRADA Whisper system mechanically replaces most high-speed backplane connectors on the market. However, it differentiates itself by using folded signal pins surrounded by strong, protective C-shaped shields, which makes it one of the most robust offerings available.
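For readers who want to relate the quoted insertion-loss figure to the signal itself, the short sketch below applies the standard decibel definitions to show what fraction of power and of voltage amplitude survives a channel with a given loss. The 1 dB input is simply the figure quoted above; the function names are my own.

# Sketch: what an insertion-loss figure in dB means for the signal.
# Standard definitions: loss_db = -10*log10(Pout/Pin) = -20*log10(Vout/Vin).

def power_ratio(loss_db: float) -> float:
    """Fraction of power that makes it through the channel."""
    return 10 ** (-loss_db / 10)

def voltage_ratio(loss_db: float) -> float:
    """Fraction of voltage amplitude that makes it through."""
    return 10 ** (-loss_db / 20)

if __name__ == "__main__":
    # The sub-1 dB figure quoted for the backplane channel:
    print(round(power_ratio(1.0), 3))    # 0.794 -> about 79% of the power
    print(round(voltage_ratio(1.0), 3))  # 0.891 -> about 89% of the amplitude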

For the highest data rates and longest reach, fiber-optic links are the only viable solution. Cables using these fibers are thinner than copper (and thus obstruct airflow less), and they more easily support higher-capacity, high-density, multilane connectivity over longer distances than copper can. But optical fiber requires electro-optical interfaces, which are sophisticated, complex devices that add to the challenge of realizing the benefits of these links. To advance the capability of fiber links operating at 25 Gbit/s and beyond, the Coolbit optical engine from TE meets high-density and high-bandwidth requirements while requiring only about two-thirds the power of conventional solutions. This complete assembly includes a VCSEL (vertical-cavity surface-emitting laser) diode, a photodiode, a transimpedance amplifier, and the other ICs needed for complete electrical-to-optical and optical-to-electrical transceiver interfaces.

The benefits of the Coolbit optical engine become apparent when multiple units are combined to scale up data rates while density and reach are also increased, as in the QSFP28 four-channel transceiver capable of transmitting and receiving 100 Gbit/s simultaneously (4 x 25 Gbit/s) (Figures 4a and 4b). Taking the combinations further, a 12-channel mid-board optics (MBO) module is capable of transmitting and receiving at 300 Gbit/s; in turn, these can be combined to enable terabit-rate systems, with horizontal linecards providing 28 Tbit/s of input/output connected to onboard MBOs. When used with managed fiber solutions on the linecards via optical flex circuits, designers can build an optical backplane capable of more than 900 Tbit/s of inter-chassis interconnectivity, which is truly impressive.
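The scale-up described here is again lanes-times-rate arithmetic. The sketch below reproduces the module figures quoted above (4 x 25 Gbit/s for the QSFP28, 12 x 25 Gbit/s for the MBO module) and then, purely as an illustration, estimates how many such MBO modules a 28 Tbit/s linecard would need; that module count is my own arithmetic, not a TE design figure.

# Sketch: lanes-times-rate arithmetic for the optical-engine combinations
# described above. The final module count is illustrative arithmetic only.

import math

LANE_GBPS = 25.0      # per-lane rate assumed throughout this section
QSFP28_LANES = 4      # QSFP28 transceiver: 4 x 25 Gbit/s
MBO_LANES = 12        # mid-board optics module: 12 x 25 Gbit/s

qsfp28_gbps = QSFP28_LANES * LANE_GBPS
mbo_gbps = MBO_LANES * LANE_GBPS
print(qsfp28_gbps, mbo_gbps)                         # 100.0 300.0

# Illustration only: feeding a 28 Tbit/s linecard with 300 Gbit/s MBO modules.
linecard_tbps = 28.0
print(math.ceil(linecard_tbps * 1000 / mbo_gbps))    # 94 modules, rounded up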

Conclusion

The switches and routers within datacenters, along with their constituent ICs, usually get the bulk of the attention, and even the credit, for the dramatic increases in performance of these vital facilities. However, they are only part of the complete story. Power connectors and cabling are required to distribute the huge amount of power needed, while providing high contact performance, low temperature rise, low losses, and clear airflow paths; backplanes must support both power needs and multi-Gbit/s data rates. When copper no longer offers the balance of speed, reach, and cost needed, electro-optical interconnects built around VCSEL-based optical engines can increase performance parameters by orders of magnitude.

Innovative solutions in these areas are making the aggressive and growing demands on datacenters achievable. Doing so requires components that address efficient power distribution and connectivity at one end of the technical spectrum, as well as multi-Gbit/s data transfer at the other. These two worlds must complement each other to deliver a complete datacenter design that meets stringent performance, power, and cost goals. Products from TE Connectivity, such as the examples cited here, play a large role in the designer's toolkit.

te.com
© 2014 TE Connectivity Ltd. family of companies. All Rights Reserved. TE Connectivity, the TE Connectivity logo, and Every Connection Counts are trademarks.