CPI POWER MANAGEMENT WHITE PAPER

Selecting Rack-Mount Power Distribution Units for High-Efficiency Data Centers

By Anderson Hungria, Sr. Product Manager of Power, Electronics & Software

Published June 2014

800-834-4969 | techsupport@chatsworth.com | www.chatsworth.com

While every effort has been made to ensure the accuracy of all information, CPI does not accept liability for any errors or omissions and reserves the right to change information and descriptions of listed services and products.

© 2014 Chatsworth Products, Inc. All rights reserved. Chatsworth Products, CPI, CPI Passive Cooling, econnect, MegaFrame, Saf-T-Grip, Seismic Frame, SlimFrame, TeraFrame, GlobalFrame, Cube-iT Plus, Evolution, OnTrac, QuadraRack and Velocity are federally registered trademarks of Chatsworth Products. Simply Efficient is a trademark of Chatsworth Products. All other trademarks belong to their respective companies. 6/14 MKT-020-613
Power consumption in the data center continues to rise. The need to provide redundant power systems with high reliability and availability of compute resources is a major driving force behind the increase in power utilization. Some data centers use as much power for non-compute, or overhead, energy such as cooling, lighting and power conversions as they do to power servers. The ultimate goal is to reduce this overhead energy loss so that more power is dedicated to revenue-generating equipment, without jeopardizing reliability and availability of resources.

There are many methods currently being implemented to reduce unnecessary power consumption in the data center: high-efficiency servers, thermal containment, raised server inlet temperatures and reduced power conversion loss. When used in combination, these approaches can deliver low Power Usage Effectiveness (PUE) values and reduce energy expenses.

As an added challenge, new trends in data center traffic highlight the importance of implementing energy efficiency techniques in facilities. According to the third annual Cisco Global Cloud Index (2012-2017), global data center traffic is expected to triple over five years, a 25 percent compound annual growth rate (CAGR), to 7.7 zettabytes by the end of 2017 [1].

[Figure 1 chart: annual global data center traffic in zettabytes per year, rising from 2.6 ZB in 2012 through 3.3, 4.2, 5.2 and 6.4 ZB to 7.7 ZB in 2017.]

Figure 1: The Cisco Global Cloud Index (2012-2017) estimates a 25% compound annual growth rate of data center networking traffic during the next five years. (Source: Cisco Global Cloud Index: Forecast and Methodology, 2012-2017)

Increased virtualization and the use of high-density devices, such as blade servers and switches, require even more power. For these reasons, it's crucial to deploy a reliable and effective power distribution unit (PDU) at the cabinet level, which is the hottest place in the data center.
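The quoted growth rate can be sanity-checked from the endpoints of Figure 1. A minimal sketch, using only the 2012 and 2017 traffic values cited above:

```python
# Check the compound annual growth rate (CAGR) implied by the
# Cisco Global Cloud Index endpoints in Figure 1.
start_zb = 2.6   # global data center traffic in 2012, zettabytes/year
end_zb = 7.7     # projected traffic in 2017, zettabytes/year
years = 5

# CAGR = (end / start)^(1 / years) - 1
cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 24%, consistent with the quoted ~25%
```

The traffic ratio of about 2.96 also confirms the "triple over five years" characterization.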
Managing Power Properly in the Data Center of the Future

Power is the biggest expense in the data center, and most of it is used to cool these facilities and keep them at a temperature that prevents servers and other devices from overheating. One way to be more energy efficient is to implement a containment strategy, and then raise the temperature in the cold aisle from the traditional setting of between 59°F (15°C) and 70°F (21°C) to a higher temperature between 80°F (27°C) and 85°F (29°C). Thermal Guidelines for Data Processing Environments, part of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Datacom Series, defines recommended and allowable environmental ranges (classes), as shown in Figure 2 [2]. Under certain conditions, data centers can save 4 to 5 percent in energy costs for every 1 degree Fahrenheit increase in server inlet temperature, according to the U.S. Environmental Protection Agency, Department of Energy, Energy Star program [3].

[Figure 2 chart: psychrometric chart of dry-bulb temperature (°F) versus dew-point temperature (°F), with constant relative humidity, dew point and wet bulb lines, showing ASHRAE environmental classes A1-A4 and the ASHRAE recommended range.]

Figure 2: 2011 ASHRAE environmental classes for data center applications support higher equipment inlet temperatures. (Source: Image based on psychrometric charts in: Data Center Environments: ASHRAE's Evolving Thermal Guidelines, ASHRAE Journal, vol. 53, no. 12, December 2011, and ASHRAE, Thermal Guidelines for Data Processing Environments, Third Edition, Chapter 2 and Appendix F, 2012; ASHRAE recommended range is added.)

Of course, the underlying risk of keeping temperatures high in data centers is that devices could fail sooner, though most IT equipment manufacturers say it's safe to raise intake temperatures to reduce overall facility energy use.
Companies such as Facebook [4] and Google [5] have been proponents of this practice for a number of years.
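The Energy Star rule of thumb above implies substantial cumulative savings from a modest setpoint change. As an illustration, treating the lower 4 percent figure as compounding per degree (the source does not specify whether the savings compound, so that is an assumption of this sketch):

```python
# Illustrative cooling-energy savings from raising the cold aisle
# setpoint, using the Energy Star rule of thumb of 4-5% savings per
# 1°F increase in server inlet temperature. Compounding per degree
# is an assumption for this sketch, not a claim from the source.
def cooling_savings(delta_f: float, savings_per_degree: float = 0.04) -> float:
    """Fraction of cooling energy saved after raising the setpoint by delta_f °F."""
    return 1 - (1 - savings_per_degree) ** delta_f

# Raising the cold aisle from a traditional 70°F to 80°F:
print(f"{cooling_savings(10):.0%}")  # roughly a third of cooling energy
```

Even under the conservative 4 percent figure, a 10°F increase saves on the order of a third of cooling energy, which is why containment plus higher setpoints is such a common pairing.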
Achieve Power Optimization with Free Cooling

The path to achieving optimization with free cooling begins with good airflow management practices, as described in the U.S. Department of Energy's publication Best Practices Guide for Energy-Efficient Data Center Design [6]. As a pioneer in Passive Cooling Solutions that promote free cooling in data centers, Chatsworth Products (CPI) brings an unmatched level of quality, expertise and efficiency to airflow management. CPI Passive Cooling was one of the first solutions to use comprehensive sealing strategies and a Vertical Exhaust Duct to maximize cooling efficiencies at the cabinet level. Now expanded to the aisle level, CPI Passive Cooling and Aisle Containment Solutions (Figure 3) allow data centers to increase heat and power densities by as much as four times their original level and increase cooling efficiency by nearly threefold.

[Figure 3 diagram: cross-section of a raised-floor computer room with alternating hot and cold aisles and a return air plenum above.]

Figure 3: Good airflow management separates cold and hot airflow pathways within the data center, leading to higher temperatures in the hot aisle where rack-mount PDUs are typically placed.

Containment keeps hot and cold air separate within the data center computer room, allowing you to confidently raise room temperature. However, when airflow containment is utilized at either the cabinet or the aisle level, the temperature in the rear of the cabinet or in the hot aisle also becomes significantly higher, which must be taken into consideration when selecting in-cabinet PDUs.
The Importance of a PDU with a High-Temperature Rating

PDUs are usually installed in the back of cabinets, directly in the hot air exhaust from equipment, which is potentially the hottest part of the data center (Figure 4). With an expected ΔT across servers of 25°F to 30°F (13.9°C to 16.7°C), the air at the rear of cabinets or within hot aisle containment can reach 110°F to 115°F (43°C to 46°C). Very few devices can operate continuously, reliably and efficiently in this type of environment.

Figure 4: Computational Fluid Dynamics (CFD) model showing hot aisle containment and resulting increase in hot aisle temperatures.

The Solution: CPI econnect PDUs

CPI econnect PDUs currently have the highest ambient operating temperature rating of any PDU on the market (Figure 5). econnect PDUs have been specially designed and tested for continuous operation in ambient air temperatures up to 149°F (65°C), exceeding the anticipated temperatures in a typical contained aisle.

[Figure 5 chart: top operating temperatures of rack-mount PDUs (°C) from several manufacturers (Geist, APC/ServerTech, Eaton/Raritan), rated between roughly 45°C and 52°C, versus 65°C for Chatsworth Products; a shaded band marks the estimated range of hot aisle temperatures within contained solutions when inlet temperatures are raised to 80.6°F (27°C) and above.]

Figure 5: This graph shows the high-end operating temperatures of several manufacturers' rack-mount PDUs. The shaded area represents the estimated outlet temperatures for data center equipment, with a 25°F to 30°F (13.9°C to 16.7°C) ΔT, at maximum conditions for the recommended and allowable environmental ranges established in the Thermal Guidelines for Data Processing Environments.
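The hot aisle estimates above follow directly from inlet temperature plus server ΔT. A minimal sketch of that check, using the inlet setpoints and ΔT range quoted in this paper:

```python
# Estimate hot aisle temperature as cold aisle inlet temperature plus
# the temperature rise (ΔT) across the servers, then compare against
# a PDU's rated ambient operating temperature.
def hot_aisle_f(inlet_f: float, delta_t_f: float) -> float:
    return inlet_f + delta_t_f

pdu_rating_f = 149  # econnect PDU ambient rating, °F (65°C)

for inlet in (80, 85):       # raised cold aisle setpoints, °F
    for delta in (25, 30):   # server ΔT, °F
        exhaust = hot_aisle_f(inlet, delta)
        ok = exhaust <= pdu_rating_f
        print(f"inlet {inlet}°F + ΔT {delta}°F -> {exhaust}°F (within rating: {ok})")
```

Every combination lands in the 105°F to 115°F range, comfortably under the 149°F rating but above the ratings of PDUs limited to roughly 45°C to 52°C (113°F to 126°F) ambient.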
To keep the product safely operational at such high temperatures, strategically placed air vents, a larger power supply, high-temperature components and other elements were included in the design. econnect PDUs comply with safety standards from the International Electrotechnical Commission and are UL Listed.*

Figure 6: In addition to operating in high ambient temperatures, econnect PDUs feature a compact chassis that fits the small space behind the equipment mounting rails to minimize interference with exhaust airflow, alternating outlet groups to help distribute load more evenly on models with multiple breakers, and a large, centrally located display for easy viewing.

Additionally, the PDU has a small, compact size to minimize the space it occupies within the cabinet. On units with multiple breakers, the outlets are arranged in an alternating pattern so you can distribute load more evenly as you plug in equipment. For intelligent PDUs, the LCD display is centrally located so that it is easy to read when the PDU is installed. The unit can be accessed remotely through a web browser for setup, monitoring and control. IP consolidation allows access to up to 20 PDUs through a single Ethernet connection and IP address. Alternatively, the PDU supports SNMP so that it can be monitored with third-party monitoring software.

Conclusion

econnect PDUs are the Simply Efficient solution to the ever-increasing demand for reliable power in the data center. econnect PDUs allow for remote access with optional monitoring and switching capabilities on outlets. With more than 180 standard configurations, including high-density models in 50A and 60A at 208V that meet power loads of up to 17 kW per unit, econnect PDUs withstand the heat loads of any hot aisle containment and are the market's best answer to the growing industrywide demand for High Performance Computing (HPC), virtualization and cloud computing.
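The 17 kW per-unit figure is consistent with standard three-phase power arithmetic. A sketch, assuming a 60A, 208V three-phase branch circuit derated to 80% for continuous load (the circuit size and the North American continuous-load derating are assumptions of this sketch, not specifications from this paper):

```python
import math

# Three-phase apparent power: P = √3 × V(line-to-line) × I.
# The 60A circuit and 80% continuous-load derating are assumptions
# used to illustrate how a ~17 kW per-PDU figure arises.
line_voltage = 208       # volts, line-to-line
breaker_amps = 60        # assumed circuit rating, amps
continuous_derate = 0.8  # usable fraction for continuous loads

usable_amps = breaker_amps * continuous_derate  # 48 A
power_kw = math.sqrt(3) * line_voltage * usable_amps / 1000
print(f"{power_kw:.1f} kW")  # ≈ 17.3 kW
```

Under those assumptions the usable capacity works out to about 17.3 kW, matching the "up to 17 kW per unit" claim.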
*IEC 60950-1:2005, Second Edition, Information Technology Equipment - Safety - Part 1; UL Listed under category NWGQ: Information Technology Equipment Including Electrical Business Equipment, UL file number E212076.
References and Acknowledgements

1 Cisco Systems, Inc. 2013. Cisco Global Cloud Index: Forecast and Methodology, 2012-2017. pp. 1-3. http://www.cisco.com/c/en/us/solutions/collateral/service-provider/global-cloud-index-gci/cloud_index_white_paper.html (or http://www.cisco.com/c/en/us/solutions/service-provider/global-cloud-index-gci/index.html).

2 American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), Technical Committee 9.9, Mission Critical Facilities, Technology Spaces and Electronic Equipment. 2012. Thermal Guidelines for Data Processing Environments, 3rd Edition. pp. 12-15 and Appendix F.

3 U.S. Environmental Protection Agency. Department of Energy. Energy Star program. Server Inlet Temperature and Humidity Adjustments. http://www.energystar.gov/index.cfm?c=power_mgt.datacenter_efficiency_inlet_temp (or http://www.energystar.gov/datacenterenergyefficiency).

4 Facebook. 2011. Reflections on the Open Compute Summit webpage. Building the Next Open Data Center in North Carolina. https://www.facebook.com/note.php?note_id=10150210054588920.

5 Data Center Knowledge. Miller. 2012. Too Hot for Humans, But Google Servers Keep Humming. http://www.datacenterknowledge.com/archives/2012/03/23/too-hot-for-humans-but-google-servers-keep-humming/.

6 U.S. Department of Energy. Office of Energy Efficiency & Renewable Energy. Federal Energy Management Program. Best Practices Guide for Energy-Efficient Data Center Design. pp. 5-8. http://www1.eere.energy.gov/femp/pdfs/eedatacenterbestpractices.pdf (or http://www.energy.gov/eere/femp/federal-energy-management-program).

Anderson Hungria, Sr. Product Manager, Power, Electronics & Software, Chatsworth Products

Anderson Hungria graduated from North Carolina State University with a Master's in Electrical and Computer Engineering. He has worked in the data center industry for seven years and has managed and introduced a variety of power distribution and monitoring products.
He previously worked at Eaton Powerware in the Data Center Solutions and Three-Phase Power groups. Hungria currently manages Rack PDUs, UPS, Environmental Monitoring and Software at Chatsworth Products (CPI).