Power Trends in Networking Electronics and Computing Systems
Herman Chu, Technical Leader, Cisco Systems
2/16/06
Topics
- Design trade-offs
- Power densities
- Cooling technologies
- Power reduction
- Looming government involvement in equipment power
Customers Are Similar to Consumers: They Want Everything
- Performance: high routing density, smallest package, rich software features, and fast hardware
- Low-cost or free hardware AND a quality product
- 24/7 customer service and support
- Minimized cost of ownership and operations
- Need to balance among all these areas
Physical Design Trade-offs
Customers don't just want efficient equipment; they want equipment with high performance and high density that consumes low power.
Competing axes: PERFORMANCE, COST, POWER CONSUMPTION, SIZE, NOISE
Thermal Design for NEBS
- High-port-density system design, thus limited space for inlet, exit, and plenum
- NEBS requirements:
  - Temperature limits, storage: -40 to 70°C
  - Temperature limits, operating: 5 to 40°C sustained; -5 to 50°C (55°C for equipment less than 1/2 rack height) for 96 hrs
  - Altitude limits: -60 to 1,800 m at 40°C and 1,800 to 4,000 m at 30°C (Cisco product spec: 5 to 40°C and 0 to 3,000 m)
  - Fan filter: a must for all forced-convection systems; minimum dust arrestance of 80% (10-15% ASHRAE dust-spot rating)
- Figure: rear exhaust, air filter, power plant, and inlet air plenum (an envelope-check sketch follows this slide)
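To make the envelope concrete, here is a minimal sketch that checks a measured operating point against the NEBS limits quoted above. The function name, parameters, and the stepwise altitude derating are my illustrative reading of the slide's numbers, not an authoritative NEBS implementation.

```python
# Sketch: check an operating point against the NEBS limits quoted above.
# Structure and names are illustrative only.

def nebs_operating_ok(temp_c: float, altitude_m: float,
                      short_term: bool = False,
                      sub_half_rack: bool = False) -> bool:
    """Return True if (temp_c, altitude_m) is inside the quoted envelope."""
    if not (-60 <= altitude_m <= 4000):
        return False
    # Altitude derating per the slide: 40 C up to 1,800 m, 30 C to 4,000 m
    sustained_max = 40.0 if altitude_m <= 1800 else 30.0
    if short_term:
        # -5 to 50 C (55 C for equipment less than 1/2 rack height) for 96 hrs
        upper = 55.0 if sub_half_rack else 50.0
        return -5.0 <= temp_c <= upper
    return 5.0 <= temp_c <= sustained_max

print(nebs_operating_ok(38, 1500))                   # True: sustained envelope
print(nebs_operating_ok(45, 500, short_term=True))   # True: 96-hr excursion
```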
Power Consumption Breakdown for a Typical Network Line Card Engine
Figure: two pie charts.
- Power consumption allocation by subsystem: fabric, support chips, CPU, SP, power conversion, misc.
- Power consumption allocation by component type: custom ASICs, memories, CPU, SP, power conversion, misc.
System Rack-Level Power Trends
Figure: rack power density (kW/rack) vs. time, 1990s to 2000s, normalized to a fully loaded 19" rack (7' tall), rising through Zones A-D; high-port-density routers/switches and high-performance-density servers track the upper curves.

Severity levels and required cooling solutions outside the equipment (a zone-classification sketch follows this slide):
- Zone D (SEVERE, facility level and system level): localized liquid cooling at the system; pre-cooling of inlet air at the rack with chilled water
- Zone C (HIGH, facility-level involvement): cooling the equipment room down to 10-15°C; spacing out racks
- Zone B (GUARDED, case-by-case assessment): reduced maximum allowable number of units per rack; derated maximum allowable temperature
- Zone A (LOW): business as usual
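A small sketch of how such a zone lookup could work. The kW/rack boundaries were on the chart's axis and are not recoverable from this text, so the thresholds below are hypothetical placeholders; only the zone ordering and severity labels come from the slide.

```python
# Sketch: classify a rack's power draw into the severity zones above.
# The kW thresholds are HYPOTHETICAL placeholders, not the chart's values.

ZONE_THRESHOLDS_KW = [          # (upper bound in kW, zone, severity)
    (5.0,  "A", "LOW"),         # placeholder bound
    (10.0, "B", "GUARDED"),     # placeholder bound
    (18.0, "C", "HIGH"),        # placeholder bound
]

def rack_zone(rack_kw: float) -> tuple[str, str]:
    for bound, zone, severity in ZONE_THRESHOLDS_KW:
        if rack_kw <= bound:
            return zone, severity
    return "D", "SEVERE"        # anything above the last bound

print(rack_zone(4.0))    # ('A', 'LOW')
print(rack_zone(25.0))   # ('D', 'SEVERE')
```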
Problem: Breaking Limits
- Cooling
  - Physical volume of air that can be delivered in a 7'-tall, 19" rack footprint
  - Acoustic noise limits
- Power Efficiency
  - Diminishing return on power-consumption overhead relative to the cooling capacity of the fans (see the sketch after this list)
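Both limits fall out of standard relations: the heat an airstream can carry is Q = m_dot * c_p * dT, and the fan affinity laws say fan shaft power grows roughly with the cube of airflow, so pushing more air through the same rack footprint costs disproportionately more fan power. A minimal sketch, with sea-level air constants and names of my own choosing:

```python
# Sketch: the physics behind the air-cooling wall.
# Heat removed: Q = m_dot * c_p * dT; fan affinity law: power ~ flow^3.
# Constants are for sea-level air; all names are illustrative.

RHO_AIR = 1.2        # kg/m^3, air density at sea level
CP_AIR = 1005.0      # J/(kg*K), specific heat of air
M3S_TO_CFM = 2118.88 # m^3/s to cubic feet per minute

def required_airflow_cfm(rack_kw: float, delta_t_c: float) -> float:
    """Volumetric airflow needed to carry rack_kw at a delta_t_c rise."""
    m3_per_s = (rack_kw * 1000.0) / (RHO_AIR * CP_AIR * delta_t_c)
    return m3_per_s * M3S_TO_CFM

def relative_fan_power(flow_ratio: float) -> float:
    """Fan affinity law: power scales with the cube of the flow ratio."""
    return flow_ratio ** 3

print(f"{required_airflow_cfm(10, 15):.0f} CFM for 10 kW at a 15 C rise")
print(relative_fan_power(2.0))  # doubling airflow costs ~8x the fan power
```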
Equivalent Rack Volume for Cooling
Figure: equivalent rack height (1, 1.5, 2 racks) vs. total rack power through Zones A-C for a self-contained, air-cooled 19" rack of equipment; beyond the approximate capacity limit of an existing high-performance data center, more than a forklift upgrade is needed (additional facility space/capacity).
- Currently, customers and government regulators pay attention only to the total system rated power
- Customers are very aware of increasing power consumption
- Push technology/physics, not cooling and power
- Key concerns for liquid cooling are RAS and an orders-of-magnitude increase in design complexity and cost
Cooling Components
- Air cooling:
  - Heat transfer: heat spreaders, heat sinks, and fans
  - Fluid medium: air to air
  - Heat transport: copper or aluminum heat sink bases and heat pipes
- Alternative cooling:
  - Heat transfer: heat exchangers, condensers, and radiators
  - Fluid medium: liquid to air
  - Heat transport: cold plates, evaporators, and pumps
Cooling Technology Roadmap
Figure: cooling technology vs. rack power density, progressing from air-to-air to liquid-to-liquid.
- Traditional air-to-air cooling
- Brute-force air cooling: issues with acoustic noise, airflow distribution, and inefficient use of air handlers
- Increased real estate: space racks out and/or reduce system card density
- Rack-level enhanced air cooling: derate the environment and subcool at the rack; requires customer infrastructure change
- Liquid-cooled equipment: requires equipment-level liquid cooling implementation and customer infrastructure change
Liquid Cooling
- When total rack power goes beyond Zone A:
  - Regardless of whether they stay with air cooling or move to liquid cooling, customers will need to increase floor space (real-estate cost) to distribute the power load, and add cooling capacity to cool the excess power load
  - Adds cooling costs and complexity at both the system level and the room level
  - Cooling redundancy and leak detection will take up a lot of additional volume
- Not a technology hurdle, but RAS and cost (product cost and cost of ownership) concerns:
  - Reliability: customer perception that pumps and heat exchangers are not reliable and that fluid could leak at their facilities
  - Availability: need cooling redundancy on pumps and heat exchangers
  - Serviceability: need cooling bypass paths (pipes, valves, and software control)
- Shifting the network and server equipment sweet spot
Refrigerant-Cooled Rack Example (figure)
Rack with Water Precooler/Chiller Example (figure)
Ideas on a Power Measurement Matrix
- Total equipment power:
  - Maximum rated (already published)
  - Typical
  - Custom configurations
- Normalized power to performance
- Power to performance per customer
- Index for overhead use of power
(a sketch of such a matrix follows)
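A minimal sketch of what such a matrix could compute. The slide names the categories but not the formulas, so the metric definitions here (watts per Gbps as the performance normalization, overhead fraction as the overhead index) and all the numbers are assumptions for illustration.

```python
# Sketch of the measurement matrix above. Metric definitions and the
# example numbers are ASSUMED for illustration, not from the slide.

from dataclasses import dataclass

@dataclass
class PowerReport:
    max_rated_w: float      # published maximum rated power
    typical_w: float        # measured typical draw
    config_w: float         # draw for a specific customer configuration
    throughput_gbps: float  # performance proxy used for normalization
    overhead_w: float       # power not delivering payload (fans, conversion loss)

    def watts_per_gbps(self) -> float:
        """Normalized power-to-performance (assumed definition)."""
        return self.typical_w / self.throughput_gbps

    def overhead_index(self) -> float:
        """Fraction of typical power spent on overhead (assumed definition)."""
        return self.overhead_w / self.typical_w

r = PowerReport(max_rated_w=4000, typical_w=2800, config_w=3100,
                throughput_gbps=640, overhead_w=450)
print(f"{r.watts_per_gbps():.2f} W/Gbps, overhead {r.overhead_index():.1%}")
```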