Improving Data Center Efficiency with Rack or Row Cooling Devices: Results of Chill-Off 2 Comparative Testing




Introduction

In new data center designs, capacity provisioning for ever-higher power densities puts into question whether conventional room-conditioning systems can manage future information technology (IT) loads. Within existing data centers, computing capacity typically increases over time as IT requirements grow, resulting in increased power and cooling requirements. Data center operators are challenged to provide adequate support infrastructure that is provisioned, or adapts accordingly, to meet future IT mission requirements while minimizing energy use. To examine modular cooling solutions, Lawrence Berkeley National Laboratory (LBNL), in partnership with the Silicon Valley Leadership Group, organized a comparison demonstration that became known as Chill-Off 2.

Evolving Data Center Cooling

As IT servers and devices continue to evolve and produce increasing amounts of heat, effective and efficient removal of this heat becomes increasingly important. One approach to improving this efficiency is to locate the heat-removal device closer to the heat-generating source. New rack- and row-mounted cooling devices, designed to be installed on or near the server rack, are referred to as close-coupled cooling devices. These devices can be used in addition to, or instead of, standard room-conditioning units. Data center operators can therefore reduce overall data center energy consumption by using modular, close-coupled cooling devices.

At a Glance

- Rack- and row-mounted, close-coupled cooling devices are more efficient than computer room cooling devices.
- Rack- and row-mounted devices are appropriate for legacy data center retrofit upgrades as well as for new data centers.
- Row- and rack-mounted devices are suitable for the higher allowable server inlet air temperatures permitted by ASHRAE guidelines.
- During testing, as server inlet air temperature was increased above 75°F, server fan energy generally increased.
- Rear-door, passive cooling devices have higher heat-removal efficiency than row and rack close-coupled devices, but rely on server fans to operate.

Challenging conventional cooling systems

Rack/row-mounted cooling devices can replace or supplement conventional cooling systems and result in energy savings. Conventional data center cooling is achieved with computer room air conditioners (CRACs) or computer room air handlers (CRAHs). These CRAC and CRAH units are typically installed in data centers on top of raised floors that are used for cooling air distribution. Such under-floor air distribution is not required by the new rack/row-mounted devices; consequently, the vagaries of under-floor airflow pathways for room conditioning are avoided. Importantly, close-coupled devices may be better suited for installation in legacy data centers where under-floor air distribution pathways are already cluttered and restricted. In a new data center, the cost of a raised floor can be eliminated.

Comparing rack/row-mounted devices to conventional cooling

Comparisons were made between room cooling units and rack/row-mounted devices, accounting for all conditioning energy used to maintain a set point temperature in a data center. In July 2009, a series of energy-efficiency tests was hosted by Oracle (previously Sun Microsystems) in Santa Clara, California, to evaluate eleven rack/row-mounted cooling devices. The test project, called Chill-Off 2, was completed in March 2010.
Complete results are available in a report to the California Energy Commission titled Demonstration of Rack-Mounted Computer Equipment Cooling Solutions. That report presents the evaluation methods and results of the testing of these cooling devices.

Industry Participation

Promoters of the comparison testing project successfully engaged multiple participants from the data center cooling industry. This allowed comparison of the newest close-coupled devices to typical room cooling/conditioning systems. The devices, which were provided by the original equipment manufacturers, included varying design features, but all could be described as close-coupled cooling solutions. In addition, other industry contributors provided support during the testing. A wireless sensing company contributed a system for monitoring temperature, humidity, fluid flow, electrical current and power, and operational status. Another industry partner contributed a robust data collection and reduction software suite that was invaluable during data reduction efforts.

Performance Results

As noted above, the eleven close-coupled computer rack cooling devices tested were either rack-level or aisle-containment type. Energy-efficiency comparison metrics were developed specifically for this project. Results showed that all eleven devices tested can provide adequate cooling of rack-mounted IT equipment; see Figure 1. All devices performed within a narrow band of energy consumption. The total power use span, from low to high, was approximately thirteen percent for the complete set of test runs. Within one set of test conditions, the span in power use varied even less. For example, in the test run with 55°F (12.7°C) chilled water and 72°F (22.2°C) server air inlet temperatures, the performance range from the highest to the lowest device was 8 percent.

Figure 1: Data Center Cooling Device Relative Performance.

More efficient than conventional room cooling

Chill-Off 2 project results showed that the tested devices are more energy-efficient than conventional data center room cooling designs, which use CRAC and CRAH units with under-floor cold air supply distribution. In support of the project, calculations were completed to estimate the energy-efficiency performance of a conventional data center cooling approach using CRAH units. Performance information for the conventional cooling design was obtained from a CRAH/CRAC device manufacturer.

Devices Tested

The rack/row-mounted computer equipment cooling devices tested were evaluated and compared based on their cooling effectiveness and energy consumption. The close-coupled systems were of various designs, but all transferred server heat to the building's chilled water system. The cooling devices tested were: rack coolers with air-to-water heat exchangers; row-type rack coolers with air-to-refrigerant or air-to-water heat exchangers; and rack rear-door passive coolers with air-to-refrigerant or air-to-water heat exchangers. In addition, a prototype direct-touch cooling system, which conducts heat to a specially designed heat transfer plate in a specially designed rack using refrigerant as the heat transfer fluid, was tested. A design consisting of a container-type enclosure cooled with chilled water was also tested for comparison, but it is not covered in this bulletin.

Passive rear-door cooler efficiency

Some of the rack-mounted cooling devices tested do not have internal fans; they rely on server fans to move rejected hot air over their heat exchange surfaces. These rear-door devices are more energy efficient than the other devices tested for a number of reasons, including not using power for fans. Even though the rear-door devices impose a restriction to airflow that must be overcome by the IT equipment fans, no energy-efficiency reduction or other negative impact was noted as a result of this airflow restriction. A companion bulletin on the rear-door coolers is available at: http://hightech.lbl.gov/documents/data_centers/rdhx-doe-femp.pdf.

Improving efficiency using warmer chilled water

During test runs, a significant improvement in cooling energy efficiency was recorded as the temperature of the chilled water increased. In the test runs, the chilled water supply temperature ranged from 45°F (7.2°C) to 60°F (15.5°C), and the server air inlet temperature ranged from 60°F (15.5°C) to 90°F (32.2°C). For example, across all devices a total energy savings of 5.3 percent was achieved by raising the chilled water temperature from 45°F (7.2°C) to 60°F (15.5°C) while the server air inlet temperature was held constant at 72°F (22.2°C). Combining higher chilled water temperatures with higher server air inlet temperatures allows increased use of free cooling to achieve even greater energy efficiency in data centers; a sketch of this relationship follows.
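To illustrate why a warmer chilled water set point expands the free-cooling opportunity, the following Python sketch estimates the fraction of hours a water-side economizer alone could meet a given set point. This is a minimal sketch under assumptions not taken from the report: the economizer is assumed to deliver water at the outdoor wet-bulb temperature plus a fixed approach, and the 7°F approach and the randomly generated climate data are purely illustrative.

import random

# Illustrative sketch (not a Chill-Off 2 result): hours of water-side
# economizer ("free cooling") availability versus chilled water set point.
# Assumption: tower + heat exchanger deliver water at wet-bulb + approach.
TOWER_APPROACH_F = 7.0  # assumed combined approach, for illustration only

def free_cooling_fraction(wet_bulb_hours_f, cwst_f, approach_f=TOWER_APPROACH_F):
    """Fraction of hours the economizer alone can meet the set point."""
    ok = sum(1 for wb in wet_bulb_hours_f if wb + approach_f <= cwst_f)
    return ok / len(wet_bulb_hours_f)

if __name__ == "__main__":
    random.seed(1)
    # Hypothetical hourly wet-bulb temperatures (F) for a mild climate.
    wet_bulb = [random.gauss(48, 10) for _ in range(8760)]
    for cwst in (45, 50, 55, 60):  # set points spanning the test range
        frac = free_cooling_fraction(wet_bulb, cwst)
        print(f"CWST {cwst} F: free cooling available {frac:.0%} of hours")

Under these assumed inputs, each 5°F increase in set point adds a substantial block of economizer hours, which is the mechanism the bulletin points to when pairing warmer chilled water with warmer server inlet air.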

Rack Coolers

A rack cooler is defined herein as a device that includes an enclosure system for a small number of racks, typically one or two. Heated server exhaust air is recirculated within the enclosure and is thus prevented from entering the computer room. The device contains heat exchangers that cool the recirculated air and return the cooled air to the server air inlet. Rack coolers contain fans or blowers that provide the flow needed to overcome the pressure drop through the enclosure and across the heat exchanger. Figure 2 shows the thermal schematic for these devices.

Figure 2: Rack cooler.

Row Coolers

A row cooler is defined herein as a device that is placed directly adjacent to computer racks, either within the row between two server racks or above the racks. The row cooler gathers warmed server exhaust air from the rear of the rack. The warm air is drawn into the device and cooled using an air-to-water or air-to-refrigerant heat exchanger; the cooled air is supplied near the server air inlet. Some manufacturers install this type of device in conjunction with additional containment structures that reduce mixing of the hot exhaust and cold supply air streams. All devices of this type were tested using the optional containment structures. Figure 3 shows the thermal schematic for these devices.

Figure 3: Row cooler.

Rear-Door Coolers

A rear-door cooler is defined herein as a device that cools hot air exiting the IT server rack. The rear-door cooler, which resembles an automobile radiator, is placed in the airflow outlet of a server rack and removes heat from the air being forced through the rack by the server fans. Generally, a rear-door device rejects this heat to circulating water from a chiller or cooling tower. This reduces the temperature of the server-rack outlet air before it is discharged into the data center. Heat exchange is performed by either air-to-water or air-to-refrigerant designs. Tests of two water-based rear-door devices showed very little efficiency impact from different server air inlet temperatures. Figure 4 shows the thermal schematic for these devices.

Figure 4: Passive rear-door rack cooler.

Direct-touch Cooler

A prototype device was demonstrated for comparison. Defined as a direct-touch cooler, this device directly cools hot electronic components located inside IT equipment, using conduction to refrigerant for heat removal. The design is unique because it uses custom-modified servers with the chassis-level fans removed, thereby reducing power consumption for a given amount of computing. The efficiency results for this design are included in a separate report. Figure 5 shows the thermal schematic for these devices.

Figure 5: Direct-touch cooler.

Evaluation Method

The primary objective of this study was to investigate how the energy efficiency and performance of rack/row-mounted devices compare with conventional data center cooling solutions. An additional feature of the test regime was to study the energy impact of varying the chilled water supply temperature and the server air inlet temperature. Seven different combinations of chilled water supply and server air inlet temperatures were developed to investigate how each device would perform under varying conditions.

Testing Approach

A comparative evaluation of cooling-device technologies required the following considerations:

- Develop a simplified test process that is easily repeated.
- Structure a level playing field of stipulated constraints.
- Produce data sets that are verifiable.
- Delineate goals that are clearly revealed when achieved.

Testing Goals

- Develop a controlled test environment to enable equal, comparative testing between different solutions.
- Test each system in an identical manner to produce comparative, unbiased data, and share these data with the industry.
- Evaluate current cooling technology efficiencies with realistic compute loads.
- Compare cooling device performance with various server air inlet and cooling water temperatures under a varying workload.

Testing Methodology

Test Area
- Tests were conducted in an isolated area to minimize external influences.
- The test area had dedicated electrical power and chilled water for the cooling devices and fan-coil unit.

Data Collection
- Record power and thermal collection points in functioning cooling devices and server racks every 30 seconds; all other data were recorded at 1-minute intervals.
- Monitor temperature and cooling water flow data with wireless instrumentation.
- Store and present the gathered data through a dedicated software collection system.

Workload Simulation
- Simulate a varying compute load by combining customized scripts wrapped into one executable file provided by project supporters.

Test Runs
- Stabilization period:
  - Ensured the server-rack temperature was steady under compute load.
  - Measured the energy balance by comparing input electrical power to the thermal energy removed by cooling (see the sketch after the Metrics Developed discussion below).
  - Monitored the stability of the air temperature exiting the servers.
- Test period: a four-hour compute-load test run.
- Varied chilled water supply temperature (CWST) and server inlet air temperature (SIAT): seven test runs were completed for each cooling device, per the following table:

Test Run    CWST     SIAT
   1        45°F     60°F
   2        45°F     72°F
   3        50°F     72°F
   4        55°F     72°F
   5        60°F     72°F
   6        60°F     80.6°F
   7        60°F     90°F

Initial observations

- Increased server energy use was clearly observed at higher SIATs.
- Existing internal server fan controls (algorithm routines) did not accommodate the ASHRAE-recommended inlet air temperature range.
- Server fan speed increased in response to supply air temperature rather than actual onboard component temperatures.
- Server energy at idle was 75% to 80% of full-compute energy; no power management was enabled.

Metrics Developed

A number of energy-efficiency metrics and test parameters were developed and used for comparing relative cooling device performance. The energy-efficiency metrics follow industry-standard methods with some modifications. Three new metrics were used to calculate and compare energy efficiency, based on the quantity of cooling provided divided by the electrical power needed. Another metric includes the power required for the rack-mounted electronic equipment, providing a total-power metric that is compared to the quantity of cooling provided. See the full report for additional information and explanations.
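The stabilization-period energy balance and the "cooling provided per power needed" style of metric described above can be illustrated with a short calculation. This is a minimal sketch, assuming steady-state water-side measurements; the function names, example values, and the 10 percent imbalance tolerance are illustrative assumptions, not figures prescribed by the Chill-Off 2 report. The water-side heat formula (Q in Btu/h = 500 x gpm x delta-T in °F) is the standard HVAC relation.

# Illustrative sketch: stabilization-period energy balance and a
# COP-style cooling-efficiency metric. Values are assumptions.

BTU_PER_H_PER_KW = 3412.0  # unit conversion

def water_side_cooling_kw(flow_gpm, t_return_f, t_supply_f):
    """Heat removed by the chilled water, using the standard relation
    Q [Btu/h] = 500 * flow [gpm] * delta-T [F]."""
    q_btuh = 500.0 * flow_gpm * (t_return_f - t_supply_f)
    return q_btuh / BTU_PER_H_PER_KW

def energy_balance_ok(it_power_kw, cooling_kw, tolerance=0.10):
    """Check that heat removed matches electrical input within a tolerance
    (the 10% threshold is an assumed value, not one from the report)."""
    return abs(cooling_kw - it_power_kw) / it_power_kw <= tolerance

def cooling_efficiency(cooling_kw, cooling_power_kw):
    """Cooling delivered per unit of cooling power (device fans, CDU,
    pumps, and chiller allocation), in the spirit of the project metrics."""
    return cooling_kw / cooling_power_kw

if __name__ == "__main__":
    q = water_side_cooling_kw(flow_gpm=30.0, t_return_f=64.0, t_supply_f=55.0)
    print(f"heat removed: {q:.1f} kW")            # ~39.6 kW for these inputs
    print("balanced:", energy_balance_ok(it_power_kw=40.0, cooling_kw=q))
    print(f"efficiency: {cooling_efficiency(q, 4.0):.1f} kW cooling per kW power")

A run is considered stabilized in this sketch when the measured water-side heat removal agrees with the electrical input; the efficiency value then feeds the device-to-device comparison.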

Efficiency Impacts

All devices tested within the group were comparable in energy-efficiency performance when considering the total power required, including the computer equipment power and the power needed to provide the cooling. The following characteristics affected energy efficiency:

IT server fan energy use

Server fan operation is a function of server inlet air temperature and varies significantly by server make, model, and other features. At some point as the server inlet air temperature is increased, the server fans will respond to cool the server. The power used by these fans usually increases with inlet air temperature as the fans speed up; see Figure 6. The average server power increased by 6.2 percent for tests that raised the server inlet air temperature from 72°F (22.2°C) to 80°F (26.6°C). In addition, if the chilled water temperature is increased, higher server air inlet temperatures can result, causing an increase in IT fan power use. Therefore, when elevating the chilled water temperature, an analysis of total power used may be warranted.

Figure 6: Passive rear-door rack cooler.

Rack and row cooler observations

During test runs, some in-row and rack cooling devices exhibited a significant increase in fan power when server air inlet temperatures were raised from 72°F (22.2°C) to 80°F (26.6°C) and 90°F (32.2°C). These devices may benefit from a review and modification of their fan speed controls to reduce unnecessary airflow when a data center operator elects to use a higher server inlet air temperature set point. For water-cooled in-row and rack devices, a water-to-water cooling distribution unit (CDU) may not be needed when higher-temperature chilled water is used. Omitting a CDU increases overall energy efficiency and lowers system first cost.

Rear-door device performance

Four rear-door devices were tested, in two general categories: 1) water cooled and 2) refrigerant cooled. The water-cooled type can perform slightly better than the refrigerant passive doors when a water-to-water CDU is not included. When a CDU is included as part of the design, some refrigerant devices perform better than the water-cooled rear-door devices.

Lessons Learned

The largest consumer of power, relative to the amount of IT power cooled, is the power needed to make the chilled water. Small increases in the chilled water supply temperature set point can provide large energy savings. Additional information is provided below.

Chiller plant energy impact

As stated, the major power-use component was the electrical power needed to make the chilled water. Chiller plant operating efficiencies vary greatly, and this should be considered when reviewing the results in this bulletin. Calculations from the Chill-Off 2 project indicated that the power needed to make chilled water averaged four times greater than the power required for all other cooling-related components combined, e.g., device, CDU, and pump power. The total power required for cooling can be reduced 15 to 20 percent, in some cases, by raising the chilled water temperature from 49°F (9.4°C) to 54°F (12.2°C). A sketch illustrating this breakdown follows at the end of this section.

Chilled water distribution energy savings

Depending upon system design, chilled water distribution pump power is not a large component of overall energy use. Savings can be achieved easily when the water supply pressure is reduced to the lowest level that optimally balances flow throughout the system, usually with a variable-speed drive. In addition, careful planning of chilled water distribution systems using variable-speed equipment and two-way modulating valves can yield energy savings with existing equipment, improve energy efficiency, and stabilize operations.

Heat removal with water or refrigerant

All of the devices used chilled water for the final heat transfer, but some rack/row-mounted devices used refrigerant as the primary heat transfer fluid, circulated in a low-pressure-drop system. The water-cooled, passive, rear-door heat exchangers had the best energy efficiency when installed without a water-to-water cooling distribution unit (CDU). When a water-to-water CDU was added to the water-cooled devices, the refrigerant-cooled passive devices had slightly higher energy efficiency than the water-cooled passive devices. The energy efficiency of refrigerant devices can be maximized by carefully matching the server heat load to the capacity of the refrigerant-to-water CDU.
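As a rough illustration of the chiller-dominance finding above (chilled water production averaging about four times the power of all other cooling components combined), the sketch below splits total cooling power accordingly and applies an assumed fractional chiller improvement per degree of set-point increase. The per-degree gains tried here are assumptions, not Chill-Off 2 measurements; under this simple model, reproducing the reported 15 to 20 percent total savings over a 5°F increase implies roughly a 4 to 5 percent chiller improvement per °F.

# Illustrative sketch (not from the report): response of total cooling
# power to a warmer chilled water set point when the chiller dominates.
# Per Chill-Off 2, the chiller used ~4x the power of everything else,
# i.e. about 80% of total cooling power.

CHILLER_SHARE = 4.0 / 5.0  # chiller = 4x all other components combined

def total_savings(delta_t_f, chiller_gain_per_f):
    """Fractional reduction in total cooling power for a set-point
    increase of delta_t_f, given an assumed chiller improvement per F."""
    chiller_factor = (1.0 - chiller_gain_per_f) ** delta_t_f
    return CHILLER_SHARE * (1.0 - chiller_factor)

if __name__ == "__main__":
    # Raising CWST from 49 F to 54 F, as in the bulletin's example.
    for gain in (0.02, 0.03, 0.04, 0.05):  # assumed chiller gain per F
        print(f"{gain:.0%}/F chiller gain -> "
              f"{total_savings(5.0, gain):.1%} total cooling power saved")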

Acknowledgments

This bulletin was prepared by Geoffrey Bell and reviewed by William Tschudi and Henry Coles at Lawrence Berkeley National Laboratory for the U.S. Department of Energy. We extend special thanks to the industry participants, including the cooler manufacturers and the monitoring software company, and to the host site for providing support throughout all phases of this project.

Bulletin Primary Author

Geoffrey C. Bell, P.E.
Lawrence Berkeley National Laboratory
One Cyclotron Road, M.S. 90-3111
Berkeley, CA 94720
510.486.4626
gcbell@lbl.gov

References

Coles, H. Demonstration of Rack-Mounted Computer Equipment Cooling Solutions. California Energy Commission, December 2010.

Contacts

For more information on FEMP:
Will Lintner, P.E.
Federal Energy Management Program
U.S. Department of Energy
1000 Independence Ave., S.W.
Washington, D.C. 20585-0121
202.586.3120
william.lintner@ee.doe.gov

For more information and resources, visit the FEMP website at www.femp.energy.gov.

Printed with a renewable-source ink on paper containing at least 50% wastepaper, including 10% post-consumer waste. CSO 23137. March 2012.