HSSG Copper Objectives and Five Criteria

Prepared by: Participants of the 100 GbE copper interest list
Presenter: Chris DiMinico
100 GbE copper interest list
Michael J. Bennett, LBLnet Services Group
Yakov Belopolsky, Bel Stewart Connector
Ed Cady, Meritec
Larry Cohen, Independent
Suveer Dhamejani, Tyco Electronics
Chris DiMinico, MC Communications
Alan Flatman, Independent
Henning Hansen, Leoni High Speed Cables
Sanjay Kasturia, Teranetics
Greg McSorley, Amphenol Interconnect Products
Ron Nordin, Panduit Corporation
Joe O'Brien, Efficere Technologies
Gourgen Oganessyan, Molex Incorporated
Joe Pein, Honda Connector
Petre Popescu, Astar
Bob Thornton, Fujitsu Components America, Inc.
Herb Van Deusen, W.L. Gore
George Zimmerman, Solarflare Communications
Supporters:
Schelto Vandoorn, Intel
Ali Ghiasi, Broadcom
Presentation objectives and non-objective
- Identify HSSG copper objectives
- Expand development of the five criteria
- Demonstrate multi-vendor support
- Non-objective: choosing a solution
HSSG copper objectives
- Support full-duplex operation only.
- Preserve the 802.3/Ethernet frame format at the MAC client service interface.
- Preserve the minimum and maximum frame sizes of the current 802.3 standard.
- Support a speed of 100 Gb/s at the MAC/PLS service interface.
- Support at least 10 m over copper.
- Support a BER better than or equal to 10^-12 at the MAC/PLS service interface.
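To give a feel for the BER objective, the sketch below converts a bit error ratio and line rate into an average time between bit errors. This is an illustrative calculation only, not part of the presentation:

```python
# Illustrative sketch: mean time between bit errors at a given line rate and BER.
def mean_time_between_errors(rate_bps: float, ber: float) -> float:
    """Average seconds between bit errors for a link at rate_bps with the given BER."""
    return 1.0 / (rate_bps * ber)

# At 100 Gb/s with BER = 1e-12, an error occurs on average every 10 seconds.
print(mean_time_between_errors(100e9, 1e-12))
```

This illustrates why a BER of 10^-12 is a demanding target at 100 Gb/s: errors arrive orders of magnitude more often than at 10 Gb/s for the same BER.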
Presentations
Contributors: Participants of the 100 GbE copper interest list

Market Requirements and Potential Presentation
1. Mike Bennett - The need for a low-cost 100GbE inter-rack copper interconnect

Technical Feasibility and Economic Feasibility Presentations
2. Chris DiMinico, MC Communications; George Zimmerman, Solarflare Communications - Twinaxial cable assembly transmission characteristics
3. Herb Van Deusen, W.L. Gore - High Speed Cabling Guidelines
4. Carl Booth, Amphenol Spectra-Strip - 24 AWG twinaxial cable structure for 25 Gb/s applications
5. Gourgen Oganessyan, Jim McGrath, Molex - 100G Copper Proposal: Technical Feasibility
6. Will Miller, Efficere Technologies - 100G Ethernet Test Fixture
Overview
- 100 GbE: 100GBASE-CXn link
- 100 GbE lane rates (for discussion)
- 10 Gb/s copper interconnect market
- 100 Gb/s copper interconnect applications
100GBASE-CXn link
[Diagram: 100GBASE-CXn Transmit Function connected to 100GBASE-CXn Receive Function across a cable assembly between the two MDIs; each lane n carries a differential pair Signal<p>/Signal<n> with a signal shield, enclosed by a link shield.]
100 GbE lane rate(s) for discussion

Configuration | Lane rate (Gb/s) | Length (m)      | Passive cable available | Connector technology available
parallel      | 10 x 10          | up to >10 (TBD) | Yes                     | Yes
parallel      | 5 x 20           | up to >10 (TBD) | Yes                     | Yes
parallel      | 4 x 25           | up to >10 (TBD) | Yes                     | Yes

Please note these rates are offered for discussion based on input from the call-for-interest list group. They are not to be considered recommendations to the HSSG.
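All three candidate configurations carry the same 100 Gb/s aggregate; a minimal sanity check of the arithmetic (illustrative Python, not from the presentation):

```python
# Candidate lane configurations from the table: (lanes, Gb/s per lane).
configs = {
    "parallel 10 x 10": (10, 10),
    "parallel 5 x 20": (5, 20),
    "parallel 4 x 25": (4, 25),
}

# Each configuration must sum to the 100 Gb/s MAC/PLS rate objective.
for name, (lanes, gbps_per_lane) in configs.items():
    total = lanes * gbps_per_lane
    print(f"{name}: {total} Gb/s aggregate")
    assert total == 100
```

The trade-off the table hints at: fewer lanes mean a smaller cable and connector, but each lane must run at a higher signaling rate over the same reach.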
10 Gb/s copper interconnect market
10 Gb/s high-speed copper interconnect

Protocol                        | Signaling (Gbd) | Length (m)
InfiniBand (parallel)           | 4 x 2.5         | 10 (loss based)
Ethernet 10GBASE-CX4 (parallel) | 4 x 3.125       | up to 15 (specified)
Fibre Channel (parallel)        | 4 x 3.1875      | 10
PCI Express (parallel)          | 4 x 2.5         | 7 (loss based)
4x connector ports - FY2007
[Pie chart: 4x connector ports (10 Gb/s), FY2007 - segments for InfiniBand, CX4, and Other, with values 50,000, 250,000, and 1,000,000. Source: members of the HSSG copper interest group list.]
100 Gb/s copper interconnect applications
High-speed copper interconnect: applications
- Intra/inter rack/cabinet applications and high-performance computing interconnect
- >10 meter (TBD) interconnect
[Diagram: rack/cabinet dimensions - 1.5 m, 2.4 m max]
TIA-942 - Cabinet and rack height: the maximum rack and cabinet height shall be 2.4 m (8 ft), and preferably no taller than 2.1 m (7 ft) for easier access to the equipment or connecting hardware installed at the top.
Conclusions
- Technical feasibility, economic feasibility, and market potential for a 100 Gb/s copper interconnect will be demonstrated.
- Up to 10 meter (TBD) reach is consistent with intra/inter rack application and HPC cluster distances.
- The High Speed Study Group should add a high-speed copper interconnect objective to address intra/inter rack applications and high-performance computing (HPC) interconnects.
BACKUP SLIDES
Market potential
- Broad set(s) of applications
- Multiple vendors, multiple users
- Balanced cost, LAN vs. attached stations

End-user responses:
- "The sweet spot for HPC is closer to 10 m; 5 meters probably won't work well in our high-performance computing (HPC) environment since the physical dimensions of the cluster and storage systems are large."
- Computer room where large switches need consolidating connections to the end-systems (large clusters, storage). These switches are close enough to the end-systems to use copper or MM fiber whenever possible.
- Low-density inter-router/switch connections within a rack or two. "We're talking ten ports max."
- Interconnect for HPC clusters requiring very high reliability.
Market potential
- Broad set(s) of applications
- Multiple vendors, multiple users
- Balanced cost, LAN vs. attached stations

The applications listed below are largely based on questionnaire responses designed to solicit end-user input on the market potential for 100 GbE over copper.
Respondents: Pacific Northwest National Laboratory; ESnet Network Engineering Group, Lawrence Berkeley National Laboratory; Electronic Systems Engineering, Computing Division, Fermilab, US DOE; Lawrence Livermore National Laboratory (LLNL).
- Right now there is a need to move and process/analyze 100+ petabyte data sets between supercomputer and storage nodes.
- Datacenter or computer room: 100 GbE copper to a 100 GbE copper+fiber router/switch.
- Low-cost interconnect between networking equipment (e.g., routers, switches) in a telecom POP or at the network edge for peering or customer handoff. There is a need for 100 GbE customer handoff and peering within 5 years.
Analysis: copper interconnect S-parameters
[Measurement setup: 8-pair, 28 AWG, 10 m cable with 4 transmit and 4 receive pairs. Measured quantities: insertion loss (Tx to Rx), 4 near-end crosstalk disturbers (multi-disturber NEXT), and 3 far-end crosstalk disturbers (multi-disturber FEXT).]
Interconnect transmission characteristics
[Plot: magnitude (dB, 0 to -140) vs. frequency (0-20000 MHz). Curves: insertion loss at 5 m and 10 m; PSNEXT, 4 disturbers, 10 m assembly; PSFEXT, 3 disturbers, 5 m and 10 m assemblies; noise for 5 m and 10 m cable+connector models; PS total disturber noise at 5 m and 10 m.]
Lane rate vs. 1/2 PAM symbol rate - 6 dB margin, 5 m cable + connectors
[Plot: lane data rate (Mbps/lane, up to 50000) and info bits/baud/dimension (up to 6.0) vs. used bandwidth (MHz) = 1/2 PAM symbol rate (Mbaud), 0-22000 MHz, for uncoded, 2 dB code, and 4 dB code cases.]
Lane rate vs. 1/2 PAM symbol rate - 6 dB margin, 10 m cable + connectors
[Plot: lane data rate (Mbps/lane, up to 20000) and info bits/baud/dimension (up to 4.5) vs. used bandwidth (MHz) = 1/2 PAM symbol rate (Mbaud), 0-22000 MHz, for uncoded, 2 dB code, and 4 dB code cases. Bits/baud/dimension = log2(PAM levels).]
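The axes of the two plots above imply a simple relation: bits per baud equal log2 of the number of PAM levels, and the plotted bandwidth is half the symbol rate. A minimal sketch of the uncoded case (illustrative, not the presentation's actual model, which also folds in the 6 dB margin and coding gain):

```python
import math

def uncoded_lane_rate_mbps(bandwidth_mhz: float, pam_levels: int) -> float:
    """Uncoded lane rate given used bandwidth (= 1/2 symbol rate) and PAM order."""
    symbol_rate_mbaud = 2 * bandwidth_mhz      # used bandwidth = 1/2 symbol rate
    bits_per_baud = math.log2(pam_levels)      # e.g. PAM-4 -> 2 bits/baud
    return symbol_rate_mbaud * bits_per_baud

# Example: a PAM-4 lane occupying 6250 MHz of bandwidth carries 25000 Mb/s uncoded.
print(uncoded_lane_rate_mbps(6250, 4))  # 25000.0
```

This makes the trade visible: reaching 25 Gb/s per lane requires either more bandwidth (higher baud rate over a lossier band) or more PAM levels (less noise margin per level).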
Cable Salz SNR - 0 folds
[Plot: SNR (dB, 0-50) vs. used bandwidth (MHz) = 1/2 PAM symbol rate (Mbaud), 0-25000 MHz. Curves: 10 m cable + 50 dB noise + 2 connectors; 5 m cable + 50 dB noise + 2 connectors.]
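An SNR curve like the one above bounds the achievable rate. The sketch below converts an SNR in dB into a Shannon-capacity upper bound for a given bandwidth; this is an illustrative bound only, not the presentation's Salz SNR computation, which is derived from the measured channel response:

```python
import math

def shannon_capacity_mbps(bandwidth_mhz: float, snr_db: float) -> float:
    """Shannon capacity bound (Mb/s) for a channel of the given bandwidth and SNR."""
    snr_linear = 10 ** (snr_db / 10)           # convert dB to linear power ratio
    return bandwidth_mhz * math.log2(1 + snr_linear)

# Example: 10000 MHz of bandwidth at 20 dB SNR bounds the rate near 66.6 Gb/s.
print(shannon_capacity_mbps(10000, 20))
```

Reading the curves this way shows why the 5 m assembly supports more bandwidth (hence fewer, faster lanes) than the 10 m assembly at the same margin.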