Optical Interconnect Technology for High-bandwidth Data Connection in Next-generation Servers



Tsuyoshi Yamamoto, Kazuhiro Tanaka, Satoshi Ide, Tsuyoshi Aoki

In the near future, improvements in the performance and integration of servers (for example, blade servers) will increase the amount of data to be transferred wherever a data connection is required. Furthermore, the transmission data rate is growing to 25 Gb/s. Together, these trends will push the interconnect bandwidth within servers to the order of 10 Tb/s. We propose an optical interconnect, that is, a data connection using optical fibers, to avoid the bandwidth limitation caused by waveform degradation and interference in copper wiring. Optical interconnects can transmit data signals at high speed and excel in terms of wiring density and transmission distance. To apply such optical interconnects to servers, many optical channels must be integrated in the limited space of a server chassis, which requires low-cost and compact optical components. In this paper, we propose optical technologies (an optical transceiver module, an optical connector and an optical mid-plane) that make low-cost, high-density optical interconnects possible, and we describe a verification of their practical use through trial manufacturing.

1. Introduction

Along with the recent performance enhancement of CPUs, the data processing capacity of servers has been improving significantly. In addition, the integration of servers such as blade servers and the spread of virtualization technology, which consolidates multiple processes on one CPU, are increasing the amount of data connection within the chassis. Furthermore, progress in transmission technology has been raising the data rate of transmission signals such as Ethernet and InfiniBand from the conventional 10 Gb/s to 25 and 40 Gb/s. All of these factors are expected to increase the volume of data connection in a server chassis at an accelerating pace. We estimate that a signal bandwidth on the order of Tb/s will be required for the interconnects of one server, and tens of Tb/s in total.

Copper wiring has conventionally been used for data connection in servers, and the limitation it places on signal data rate and density, due to waveform degradation and interference, has hindered expansion of interconnect capacity. As a solution to this issue, we are studying the use of optical fiber transmission. It offers high speed, high capacity and long transmission distance, and allows high-density transmission of signals at data rates of 25 to 40 Gb/s. The technology, which has been in use for a few decades in fields including long-haul communications and access networks, is now beginning to be used for connections between server racks. To apply such fibers to intra-server connection, numerous optical channels must be integrated in the limited space of a server chassis. This requires technologies that allow various optical components to be fitted into a compact space and manufactured at low cost. We have developed the technologies that will be required for future servers, and they are described in this paper. The optical connector and optical mid-plane technologies presented in this paper are products of joint research with Furukawa Electric Co., Ltd.
2. Optical interconnect technology for servers

To study the configuration and specification of an optical interconnect, we first assumed that the data connections of a blade server would be migrated to optical connections. A blade server integrates many thin server units in one chassis, and they are connected with high-speed signals. In existing blade servers, the units are interconnected via an electrical mid-plane (a printed circuit board). Large quantities of signals are exchanged via the mid-plane, so high-capacity transmission is required. We organized information such as the bandwidth of CPU I/O signals and the signal connection destinations to clarify the configuration (Figure 1), specification and requirements for the respective optical components of an intra-server optical interconnect. The existing high-speed signal interface of a blade server uses a matrix connection of 10-Gb/s I/O signals (Ethernet) from the CPU unit via a switch unit, and we studied a configuration with the speed of these signals increased to 25 Gb/s. We also considered a configuration in which the PCI Express signals of a server unit are connected to external I/O via a PCI switch for greater flexibility of external connection, and another configuration in which a large memory unit can be connected to the CPU unit as required. For the present study, we estimated that using optical fiber for these connections would result in about 100 lanes of optical signal I/O and a bandwidth of 1.3 Tb/s per server unit, and about 2000 fibers connecting the server units. 1)

Replacing copper wiring with optical wiring requires compact optical transceivers for converting the high-speed, high-density electrical signals output from a CPU or LSI into optical signals, a high-bandwidth mid-plane for routing the optical signals to other CPU units, and optical connectors that connect between them. We have developed these elemental technologies based on the specification above. To apply the technology conventionally used in telecom optical communication networks to intra-server optical wiring, a significant cost reduction and density increase are required.

Figure 1: Configuration of intra-server optical interconnect (server blades with optical transceivers, network switch blade, PCI switch blade and memory blade connected via an optical mid-plane).

3. Optical transceiver technology

Existing optical transceivers have transmission rates of 10 to 14 Gb/s for 10GbE and InfiniBand Fourteen Data Rate (FDR). To achieve the 25 Gb/s transmission rate expected for next-generation 100GbE and InfiniBand Enhanced Data Rate (EDR), the speed of the optical transceivers themselves must be increased. For transmission at 25 Gb/s, it is essential not only to increase the speed of the optical devices and drive circuit but also to shorten the electrical wiring from the CPU to the optical transceiver, in order to avoid degradation of the electrical signal quality due to wiring loss or inter-wiring interference. This meant we needed technology to significantly reduce the size of the transceiver. To resolve these issues and reduce cost, we have developed an opto-electric converter and an optical transceiver that integrates it.

1) High-speed circuitry technology
Using inexpensive optical devices is unavoidable for cost reduction, but their response performance is insufficient. To address this, we developed a driver circuit that makes the rise and fall times steeper 2) and suppresses the multiple reflections that degrade the electrical signal waveforms, 3) thereby realizing a speed increase. This has raised the transmission rate per channel from the current 10 to 14 Gb/s to 25 Gb/s.
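The edge-sharpening role of such a driver can be pictured with a generic two-tap pre-emphasis model, sketched below in Python. This is only an illustration of the general technique under assumed parameters (tap weight ALPHA and oversampling factor), not the dual peak-tunable pre-emphasis circuit of Reference 2; in a real transmitter the equivalent shaping is performed in the analogue driver and tuned to the response of the particular optical device.

```python
import numpy as np

# Illustrative two-tap pre-emphasis for driving a bandwidth-limited optical
# device (e.g. a 10 Gb/s-class VCSEL) with a 25 Gb/s NRZ pattern.  This is a
# generic model of the technique, not the circuit of Reference 2; the tap
# weight and oversampling factor below are assumptions for the sketch.
ALPHA = 0.4           # pre-emphasis tap weight (assumed)
SAMPLES_PER_BIT = 8   # oversampling factor, for inspection only (assumed)

def pre_emphasize(bits, alpha=ALPHA):
    """Symbol-rate drive levels: y[n] = (1 + alpha) * d[n] - alpha * d[n-1]."""
    d = np.asarray(bits, dtype=float)
    d_prev = np.concatenate(([d[0]], d[:-1]))   # previous symbol (no boost on bit 0)
    return (1.0 + alpha) * d - alpha * d_prev   # overshoot/undershoot at transitions

def to_waveform(levels, sps=SAMPLES_PER_BIT):
    """Hold each drive level for one bit period (rectangular NRZ shaping)."""
    return np.repeat(np.asarray(levels, dtype=float), sps)

if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    bits = rng.integers(0, 2, size=16)
    drive = pre_emphasize(bits)
    wave = to_waveform(drive)
    # Transitions are driven to 1 + ALPHA or -ALPHA instead of 1 or 0, which
    # steepens the rise/fall edges produced by the slower optical device.
    print("bits :", bits.tolist())
    print("drive:", np.round(drive, 2).tolist())
    print("waveform samples:", wave.size)
```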
2) Compact optical coupling technology
To improve optical coupling between an opto-electric converter and an optical fiber, an optical coupling unit composed of a lens is currently used, which poses problems because of the large size and high cost of the lens. We have sought to reduce the size of the optical transceiver by mounting the optical devices and ICs on a flexible printed circuit (FPC) board to construct an opto-electric converter. In addition, we have developed a cost-effective film-type lens sheet and stacked it on the underside of the FPC, which solved these problems. 4)

3) Compact optical transceiver technology

We have successfully developed a compact optical transceiver that uses a card-edge-type structure. A sub-board with the above-mentioned FPC-type opto-electric converter mounted on each side is inserted vertically into an edge-type connector on the mother board. Using this technology, we carried out a trial manufacture of an opto-electric converter with four channels at 25 Gb/s [Figure 2 (a)]. It measures 22 × 9 × 0.86 mm (including the mounted electronic components and optical waveguide). Compared with conventional lens-type opto-electric converters, the thickness has been reduced to less than one-tenth for the lens and less than one-third for the opto-electric converter. We have taken advantage of this thinness to develop a compact optical transceiver with the experimentally manufactured opto-electric converter on each side [Figure 2 (b)]. It provides eight transmitter channels and eight receiver channels in a compact size of 47.8 × 16 × 21.6 mm, which allows it to be mounted with a small footprint near devices such as a CPU on a mother board. 4) In all eight channels measured, a good transmission waveform and error-free operation at 25 Gb/s were confirmed. 5)

4. Optical connector technology

Optical interconnects in a blade server require detachable optical connectors between the respective blades and the mid-plane to allow a flexible server configuration and easy maintenance. Requirements for optical connectors in this application include low loss, high density and low cost. Cost, in particular, is the highest-priority issue for realizing optical interconnects. Standard multi-fiber optical connectors for telecommunication use are difficult to apply inside servers from a cost perspective, and one-tenth of the conventional cost must be pursued. To resolve these issues, we have taken the approach of reducing process cost by cutting back the polishing processes, which account for a major portion of that cost, and by increasing the number of fibers per multi-fiber optical connector. We developed a new optical connector with a compact quadruple housing based on the standard multi-fiber mechanical transfer (MT) ferrule, 6) which makes it possible to connect 96 fibers simultaneously. 7) Figure 3 shows the structure and a photo of the developed optical connector.

Figure 2: Compact optical transceiver for optical interconnect in a blade server: (a) opto-electric converter on a flexible printed circuit board with the lens sheet and a 70-mm optical waveguide stacked on it (22 × 9 mm, 0.86 mm thick); (b) developed compact optical transceiver (47.8 × 16 × 21.6 mm) with card-edge connector.
Figure 3: Developed optical connector: (a) structure (quadruple housing with MT ferrules connecting a CPU blade to the mid-plane); (b) appearance.
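As a rough consistency check of our own (not part of the original design work), the connector and mid-plane fiber counts quoted in this paper fit together as sketched below; the quadruple housing described in the next paragraph holds four 24-fiber MT ferrules.

```python
import math

# Back-of-envelope check of the connector fiber counts quoted in the text.
# The 2000-fiber figure is the mid-plane estimate from Section 2; the rest
# is simple arithmetic, not data from the trial manufacture.
FIBERS_PER_MT_FERRULE = 24   # 24-fiber MT ferrule
FERRULES_PER_HOUSING = 4     # 2 x 2 quadruple housing

fibers_per_connector = FIBERS_PER_MT_FERRULE * FERRULES_PER_HOUSING
print("fibers per quadruple connector:", fibers_per_connector)       # 96

MIDPLANE_FIBERS = 2000       # target wiring capacity of the optical mid-plane
connectors_needed = math.ceil(MIDPLANE_FIBERS / fibers_per_connector)
print("connectors needed for ~2000 fibers:", connectors_needed)      # 21
```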

The quadruple housing consists of a 2 × 2 array accommodating four ferrules. It reduces the footprint per ferrule to about one-third of that of a configuration using a housing with a standard multi-fiber push-on (MPO) connector. 8) Lower cost has been achieved by almost halving the number of mechanical components used to join the male and female structures of the connector, despite the increased density. By using 24-fiber MT ferrules to accurately mount the ends of the cut fiber sheet on the mid-plane, the end-polishing process, a main component of the manufacturing cost, has been eliminated. This has successfully reduced the connector cost while satisfying the tolerance for connection loss.

5. Optical mid-plane technology

For the optical interconnects discussed in this paper, optical wiring of 1500 to 2000 fibers to an optical mid-plane is required. This calls for layout technology that allows efficient wiring of many optical fibers, technology for integrating optical fibers at high density, and optical connector technology that simultaneously makes a large number of connections between the optical mid-plane and the server blades equipped with optical transceivers. For the optical fiber layout, we selected paths with the minimum wiring length for each connection to achieve an optimum fiber layout. Optical fiber featuring a small diameter and high bending durability 6) has been used to house the fibers at high density, successfully increasing the wiring capacity from the few hundred fibers of an ordinary blade server to 2000 fibers. Eight times the connection density of conventional high-speed electrical connectors was achieved by using the above-mentioned high-density, multi-lane optical connector technology. The connector housing has a floating mechanism on the mid-plane, which allows the electrical and optical connectors to mate simultaneously when a blade is inserted or removed. This structure lets the optical fiber portion be connected easily, improving work efficiency.

6. Server verification using optical connection

We experimentally manufactured the respective elemental devices using the developed technologies and combined them to build a pseudo-server for verifying their characteristics. The produced optical mid-plane is shown in Figure 4 (a) and the pseudo-server that uses the mid-plane is shown in Figure 4 (b). The capability of compactly housing about 2000 optical fibers has been confirmed. The loss of the developed optical connector is 0.29 dB on average and 0.79 dB at the maximum, which achieves the target and has been confirmed to satisfy the loss budget. In addition, simultaneous insertion and removal of the electrical and optical connectors at the time of blade insertion and removal has been confirmed to be possible. For verification of high-speed signal transmission, we connected output optical signals from a pseudo-server blade equipped with a 25 Gb/s optical transceiver through the optical mid-plane to the optical transceiver module on another system board and confirmed error-free transmission [Figure 4 (c)]. The performance has been validated to reach 50 Tb/s in total with optical transmission at 25 Gb/s per fiber and a housing of 2000 fibers.

Figure 4: Result of trial manufacture of a server using optical connection: (a) optical mid-plane (electrical wiring board, optical connectors and optical fiber sheet); (b) pseudo-server (CPU blades); (c) 25 Gb/s reception waveform.
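The headline figures of this verification can be reproduced with simple arithmetic, as in the sketch below. It is a consistency check only: the assumption of about 50 transmit lanes per blade (half of the roughly 100 optical I/O lanes estimated in Section 2) and the 0.8 dB per-connector allowance are illustrative values chosen here, not the actual loss budget used in the design.

```python
# Consistency check of the verification results (illustrative assumptions only).
LANE_RATE_GBPS = 25        # demonstrated per-fiber data rate
MIDPLANE_FIBERS = 2000     # fibers housed in the optical mid-plane
TX_LANES_PER_BLADE = 50    # assumed: half of the ~100 optical I/O lanes per blade

aggregate_tbps = MIDPLANE_FIBERS * LANE_RATE_GBPS / 1000
per_blade_tbps = TX_LANES_PER_BLADE * LANE_RATE_GBPS / 1000
print(f"aggregate mid-plane bandwidth: {aggregate_tbps:.0f} Tb/s")   # 50 Tb/s
print(f"per-blade transmit bandwidth : {per_blade_tbps:.2f} Tb/s")   # ~1.3 Tb/s in the text

# Assumed allowance of 0.8 dB per detachable connector (illustrative only);
# compare with the connector losses measured on the pseudo-server.
ALLOWANCE_DB = 0.8
measured_db = {"average": 0.29, "maximum": 0.79}   # from the text
for label, loss in measured_db.items():
    print(f"{label} connector loss {loss:.2f} dB <= {ALLOWANCE_DB} dB:",
          loss <= ALLOWANCE_DB)
```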
This technology makes it possible to realize an intra-server optical interconnect that accommodates 2000 optical fibers in a compact housing and offers a bandwidth of 50 Tb/s, ten times that of conventional electrical wiring. In this way, data connection in next-generation servers can be made faster.

7. Conclusion

This paper has described optical interconnect technology for increasing the bandwidth of data connection in next-generation servers. We have proposed optical transceiver, optical connector and optical mid-plane technologies based on cost reduction and increased wiring density, and have demonstrated their effect through trial manufacturing. In the future, we intend to commercialize these developments and work on improving their reliability and manufacturability.

References
1) J. Matsui et al.: High Bandwidth Optical Interconnection for Densely Integrated Server. Proc. OFC 2013, OW4A.4, 2013.
2) Y. Tsunoda et al.: 25-Gb/s Transmitter for Optical Interconnection with 10-Gb/s VCSEL Using Dual Peak-Tunable Pre-Emphasis. Proc. OFC/NFOEC 2011, OThZ2, 2011.

3) M. Sugawara et al.: Novel VCSEL Driving Technique with Virtual Back Termination for High-speed Optical Interconnection. Proc. Photonics West, paper 8267-37, 2012.
4) T. Shiraishi et al.: Cost-effective Low-loss Flexible Optical Engine with Microlens-imprinted Film for High-speed On-board Optical Interconnection. Proc. 62nd ECTC, pp. 1505-1510, 2012.
5) T. Yagisawa et al.: 200-Gb/s Compact Card-edge Optical Transceiver Utilizing Cost-effective FPC-based Module for Optical Interconnect. Tech. Digest of ECOC 2012, We.1.E.3, 2012.
6) IEC 61754-5: Fibre optic connector interfaces - Part 5: Type MT connector family.
7) M. Iwaya et al.: Development of Optical Wiring Technology for Optical Interconnects. Furukawa Review, Vol. 129, pp. 1-4, 2012 (in Japanese).
8) IEC 61754-7: Fibre optic connector interfaces - Part 7: Type MPO connector family.

Tsuyoshi Yamamoto
Mr. Yamamoto is currently engaged in the development of optical interconnect systems.

Kazuhiro Tanaka
Mr. Tanaka is currently engaged in the development of mounting technology for optical transceiver modules.

Satoshi Ide
Mr. Ide is currently engaged in the development of high-speed drive circuits for optical transceiver modules.

Tsuyoshi Aoki
Mr. Aoki is currently engaged in the development of optical interconnection mounting technology.