Datacenter Performance: Dramatic Advances Driven by Innovative Power and Interface Connectivity Technologies

We're seeing connectivity reaching everywhere and reaching further, with Internet traffic volume exploding and data-storage needs growing at hard-to-comprehend rates. The amount of physical infrastructure, connectivity, and electrical power required to support this growth is astounding. This infrastructure, often referred to as data and computing centers, divides into four general classes. Such a center can be designed for brief but high transaction rates, such as those that large retailers, search engines, and other companies require; for traditional IT applications, often referred to as enterprise computing; for hosting cloud-based applications, which need robust and consistent performance; or for single-purpose, extremely low-latency transactions, as demanded by stock traders.

Regardless of the category, they all share two almost unquenchable thirsts: the need for data pathways with ever-higher speed and density, and the need for more electrical power delivered throughout to the electronics (along with removal of the resulting heat). It's ironic and perhaps counterintuitive that the dramatic enhancement in the performance and capability of these facilities depends on two very different aspects of technology, yet their futures and fortunes are closely linked.

The data rate imperative

Within the datacenter, gigabit links are being pushed to ever-higher rates. Internal data lanes operating at 10 Gbit/s are rapidly being superseded by lanes at 25 Gbit/s, 40 Gbit/s, and even 100 Gbit/s. There are single-lane approaches, as well as multiple higher-speed links routed in parallel for yet-higher aggregate speeds; for example, a run of 4 x 25 Gbit/s lanes yields 100 Gbit/s overall. Copper cable is the most obvious physical medium for many of these links, due to its moderate cost, ease of use, and historical momentum.
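The lane-aggregation arithmetic above is worth making explicit. The sketch below is a hypothetical helper (the function name is my own, not from any product documentation) that computes aggregate throughput from lane count and per-lane rate:

```python
def aggregate_rate_gbps(lanes: int, lane_rate_gbps: float) -> float:
    """Aggregate throughput of parallel lanes, in Gbit/s.

    Deliberately ignores line-coding overhead (e.g. 64b/66b), which
    costs a few percent on real links; this is back-of-envelope only.
    """
    return lanes * lane_rate_gbps

# A 4 x 25 Gbit/s run, as used for 100 Gigabit Ethernet:
print(aggregate_rate_gbps(4, 25.0))  # -> 100.0
```

The same arithmetic scales to wider assemblies, e.g. twelve 25 Gbit/s channels yield 300 Gbit/s.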
But those factors are no longer enough to make copper the backbone medium of choice in many cases, since it cannot achieve the required combination of data rate and distance. Instead, designers are increasingly turning to fiber-optic (FO) links, which support tens of Gbit/s far more readily and can go well beyond what copper offers. (Fiber is already the standard medium for long-distance external links, of course.) But going to FO brings challenges as well: it is initially more expensive, installation is more demanding, and even making the necessary physical connections is more difficult. A general guideline is to use
copper where you can, but fiber where you have to, and the "have to" part increasingly dominates the tradeoff.

There is a difficult tradeoff to be made with data links: as the data rate goes up, the reach, or achievable distance, goes down. The highest rates are achievable only over shrinking distances, so any gain in speed may be outweighed by the need for repeaters to extend the data path, or by the requirement for a more tightly packed design. The speed/reach tradeoff is sharper for copper than for FO links, but with either one the system architect faces a dilemma: whether it is called a compromise or a tradeoff, there is no simple answer.

Power issues also dominate

From a technical and engineering standpoint, providing operating power to the datacenter and removing the heat that power supplies and components dissipate are far removed from issues of higher data rates. Yet power-related concerns can define system design and implementation as much as the need for speed. Power consumption (and thus dissipation) per rack has increased by a factor of 10 over the last decade, according to reputable studies. This increase creates two closely linked problems: supplying the increased amount of electrical power, and removing the heat from both the power supply and the active electronics in a more crowded enclosure.

Complicating the power-delivery issue is the reality of cabling and connectors. High-level conceptual schematics and block diagrams may show a few power-supply blocks and interconnects in the overall path from high-voltage source to final low-voltage rails, but the actual situation is quite different. Due to the power flow within the larger datacenter, the physical construction of the racks, and the connections to the boards within each rack, dozens of cables and associated connectors are needed for unavoidable power-path transitions.
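The "copper where you can, fiber where you have to" guideline can be sketched as a simple media-selection rule. The rate/reach breakpoints below are illustrative assumptions chosen for the example only, not values from any standard; real copper reach depends on wire gauge, encoding, and equalization:

```python
def choose_medium(rate_gbps: float, reach_m: float) -> str:
    """Toy media-selection heuristic: copper where you can, fiber where you must.

    The copper reach limits are illustrative assumptions, not standardized
    figures; they stand in for the speed/reach tradeoff described above.
    """
    # Assumed copper reach limit (meters) shrinks as per-lane rate rises.
    copper_reach_limit = 7.0 if rate_gbps <= 10 else 3.0 if rate_gbps <= 25 else 0.0
    return "copper" if reach_m <= copper_reach_limit else "fiber"

print(choose_medium(10, 5))   # short 10 Gbit/s run: copper suffices
print(choose_medium(25, 15))  # longer 25 Gbit/s run: fiber required
```

The point of the sketch is the shape of the decision, not the numbers: as rate rises, the copper region shrinks toward zero and fiber takes over.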
At each connector, even a small amount of contact resistance and voltage drop (IR loss) results in heat dissipation (I²R), yet the temperature rise at the connector must be kept modest (typically under 30°C) for safety and performance. Complicating the issue is the growing demand for connectors that support hot swapping of boards, where boards are removed or plugged in with power on. This can lead to power-sequencing problems and contact degradation if the connectors are not appropriate or the contact materials are not robust.

Technology innovation offers improved alternatives

Meeting the ever-growing requirements of datacenters requires innovation at all levels, starting with basic contacts and copper cable construction. These factors are sometimes assumed to be mundane, but that is certainly not the case. A few examples make clear that no single improvement is the answer; instead, advances ranging from basic power and data connector configuration to advanced fiber interfaces are all part of the solution.

In connectors, the TE Connectivity Multi-Beam XL Power Distribution Connector System has become a de facto standard for modular, hot-swappable power-distribution systems, supporting AC and DC power in the same connector along with pre-defined power and signal sequencing, which is critical for hot swapping (Figure 1). It is based on a unique multi-cantilever-beam design for the
power contact, which offers a rating up to 43 A per contact. It uses high-conductivity base metals for a contact resistance of only 0.7 milliohms even at end of life, and it also has lower mating forces than alternative designs. The Multi-Beam XL Power Distribution Connector System is available in both board-to-board and cable-to-board versions, each offered as a highly modular platform. This supports custom variations in the number of power and/or signal contacts with little or no tooling cost, an added benefit for designers who need to tailor their signal and power configurations. Standardized connectors such as these, which meet multiple, demanding, and often conflicting requirements, are a major boon to designers. Beyond their basic performance, their high volume of design-ins means they are fully characterized and understood, minimizing the surprises that often accompany leading-edge products.

On the data-flow side, designers need to route signals many ways: from each rack to the top-of-rack switch, to the end-of-row switch, from the end-of-row switch to the core switch, to an aggregation switch, to cite just a few of the many stages and transitions. But the large number of parallel pathways and cables means that vital cooling airflow can be choked off by bundles of copper cables. To overcome these limitations, TE developed the QSFP+ direct-attach copper-cable assemblies as a high-speed, cost-effective alternative to fiber optics for Ethernet, Fibre Channel, and InfiniBand applications (Figure 2). In addition to standard 30, 28, and 26 AWG wire gauges, TE offers this technology in fine-gauge 33 AWG, 8-pair cable assemblies. This satisfies the need for ultra-thin, lightweight, highly flexible cabling in high-density, intra-rack applications, as the thinner-wire assembly provides improved airflow channels.
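Putting the connector figures quoted above (43 A rating, 0.7 milliohm end-of-life contact resistance) into the earlier I²R point gives a feel for the thermal stakes. This is standard Joule-heating arithmetic; the helper name is my own:

```python
def contact_dissipation_w(current_a: float, resistance_ohm: float) -> float:
    """Joule heating (P = I^2 * R) in a single connector contact, in watts."""
    return current_a ** 2 * resistance_ohm

# At 43 A through a 0.7 milliohm contact:
p = contact_dissipation_w(43.0, 0.7e-3)
print(round(p, 2))  # -> 1.29 (watts per contact)
```

Roughly 1.3 W per contact may sound small, but multiplied across dozens of contacts in a crowded enclosure, it shows why low contact resistance and a modest temperature rise are hard requirements rather than nice-to-haves.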
Simply fabricating the assemblies with thinner wire was not a quick and easy solution, since doing so would normally degrade signal integrity to unacceptable levels. But the technology used in the QSFP+ design does not make this compromise: it supports 100 Gbit/s data rates with reach comparable to the heavier-gauge cables. As a result, designers have a relatively painless way of maintaining and even upgrading performance without a major redesign of their wiring topology and routing.

In addition to cables and connectors, the passive backplane is another area where chokepoints can slow down overall system performance. That's where the revolutionary design of the STRADA Whisper Backplane Interconnection System from TE is a major advance: it transfers data at 25 Gbit/s and offers scalability up to 40 Gbit/s, allowing designers to achieve efficient future system upgrades without costly backplane or mid-backplane redesigns (Figure 3). It features extremely low noise, low insertion loss, and little
to no skew, a result of signal contacts being arranged horizontally in high-speed differential pairs. This system simplifies and improves backplane board design while maintaining signal integrity and saving board space. Each differential pair is individually shielded and surrounded by six ground connection points for excellent signal integrity and EMI performance. Insertion loss is less than 1 dB and is linear up to 20 GHz. The benefit is that the backplane, which is easily overlooked as a limiting factor at the system level, is no longer a potential bottleneck. To simplify implementation, the STRADA Whisper system mechanically replaces most high-speed backplane connectors on the market. It differentiates itself by using folded signal pins surrounded by strong, protective C-shaped shields, making this system one of the most robust offerings available.

For the highest data rates and longest reach, fiber-optic links are the only viable solution. Fiber cables are thinner than copper (thus obstructing airflow less), and they more easily support high-capacity, high-density, multilane connectivity over longer distances. But optical fiber requires electro-optical interfaces, which are sophisticated, complex devices that add to the challenges of realizing the benefits of these links. To advance fiber links to 25 Gbit/s and beyond, the Coolbit optical engine from TE meets high-density and high-bandwidth requirements while requiring only about two-thirds the power of conventional solutions. This complete assembly includes a VCSEL (vertical-cavity surface-emitting laser) diode, photodiode, transimpedance amplifier, and the other ICs needed for complete electrical/optical and optical/electrical transceiver interfaces.
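The backplane insertion-loss figure quoted above (under 1 dB out to 20 GHz) translates directly into delivered signal power via the standard decibel relation; the helper name below is my own:

```python
def power_fraction_through(insertion_loss_db: float) -> float:
    """Fraction of input power delivered through a link with the given insertion loss.

    Uses the standard relation: loss_dB = -10 * log10(P_out / P_in).
    """
    return 10 ** (-insertion_loss_db / 10)

# A 1 dB insertion loss still delivers about 79% of the signal power:
print(round(power_fraction_through(1.0), 3))  # -> 0.794
```

In other words, staying under 1 dB means the backplane consumes barely a fifth of the signal power budget, leaving margin for the cables and connectors elsewhere in the channel.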
The benefits of the Coolbit optical engine are apparent when multiple such units are combined to scale up data rates while density and reach are also increased, as in the QSFP28 4-channel transceiver capable of transmitting and receiving 100 Gbit/s simultaneously (4 x 25 Gbit/s) (Figures 4a and 4b). Taking the combinations further, a 12-channel mid-board optics (MBO) module transceiver is capable of transmitting and receiving at 300 Gbit/s; in turn, these are combined to enable terabit-rate systems, with horizontal linecards providing 28 Tbit/s of input/output connected to onboard MBOs. When used with managed fiber solutions on
the linecards via optical flex circuits, designers can build an optical backplane capable of more than 900 Tbit/s of interchassis interconnectivity, a truly impressive figure.

Conclusion

The switches and routers within datacenters, along with their constituent ICs, usually get the bulk of the attention, and even the credit, for the dramatic increase in performance of these vital facilities. However, they are only part of the story. Power connectors and cabling are required to distribute the huge amounts of power needed while providing high contact performance, low temperature rise, low losses, and clear airflow paths; backplanes must support both power needs and multi-Gbit/s data rates. When copper no longer offers the needed balance of speed, reach, and cost, electro-optical interconnects built on VCSEL-based engines can increase performance parameters by orders of magnitude.

Innovative solutions in these areas are making the aggressive and growing demands on datacenters achievable. Doing so requires components that address efficient power distribution and connectivity at one end of the technical spectrum, as well as Gbit/s data transfer at the other. These two worlds must complement each other to deliver a complete datacenter design that meets stringent performance, power, and cost goals. Products from TE Connectivity, such as the examples cited, play a large role in the designer's toolkit.

te.com
© 2014 TE Connectivity Ltd. family of companies. All Rights Reserved. TE Connectivity, TE Connectivity (logo), and Every Connection Counts are trademarks.