Variable-Speed Fan Retrofits for Computer-Room Air Conditioners



Variable-Speed Fan Retrofits for Computer-Room Air Conditioners

Prepared for the U.S. Department of Energy Federal Energy Management Program
Technology Case Study Bulletin
By Lawrence Berkeley National Laboratory
Steve Greenberg
September 2013

Contacts

Steve Greenberg
Lawrence Berkeley National Laboratory
One Cyclotron Road, 90R3111
Berkeley, California 94720
(510) 486-6971
segreenberg@lbl.gov

For more information on FEMP, please contact:
Will Lintner, P.E., CEM
Federal Energy Management Program
U.S. Department of Energy
1000 Independence Ave. S.W.
Washington, D.C. 20585-0121
(202) 586-3120
william.lintner@ee.doe.gov

Acknowledgements

EPRI: Dennis Symanski, Brian Fortenbery
Synapsense: Garret Smith, Patricia Nealon
Vigilent: Corinne Vita
NetApp: Ralph Renne
LBNL: Ed Ritenour, Mike Magstadt, William Tschudi

Table of Contents

1 Executive Summary
2 Introduction
3 The Opportunity
4 Steps for implementing a variable-speed drive fan retrofit
  4.1 Install variable-speed drives
  4.2 Install monitoring and control system
  4.3 Start-up, test, and commission the system
5 Results
  5.1 Energy savings potential
  5.2 Effect on operations
6 Conclusion
References

1 Executive Summary

The purpose of this case study is to provide concepts for more cost-effective cooling solutions in data centers, while keeping in mind that the reliability of computing systems and their respective cooling systems is always a key criterion.

This case study documents three retrofits to existing constant-speed fans in computer-room air conditioners (CRACs), all located in California: first, a forty-year-old, 6,000 ft² data center located at the Lawrence Berkeley National Laboratory (LBNL) in Berkeley, with down-flow, water-cooled CRACs; second, a three-year-old, 1,000 ft² data center owned by NetApp in Sunnyvale, with up-flow, water-cooled units; and third, a twelve-year-old, 1,300 ft² data center owned by the Electric Power Research Institute (EPRI) in Palo Alto, with down-flow, air-cooled units.

Prior to these retrofits, it was widely assumed that conventional CRACs with reciprocating compressors and direct-expansion cooling coils needed to operate with constant fan speed, in order to keep the coils from freezing and for other operational reasons. Case studies of retrofitted variable-speed controls in the three data centers, using two different vendors of wireless controls, demonstrated that variable-speed operation is not only possible but also yields significant energy savings with equal or improved cooling and reliability. Cooling system energy use fell by 22-32% compared to the constant-speed fan case. These systems, including the monitoring and controls needed for proper operation, have simple payback periods under two years.

2 Introduction

This case study is one in a series created by the Lawrence Berkeley National Laboratory (LBNL) for the Federal Energy Management Program (FEMP), a program of the U.S. Department of Energy.

Geared toward engineers and data center Information Technology (IT) and facility managers, this case study provides information about technologies and practices to use in operating and retrofitting existing data centers for sustainability. It documents three retrofits to existing constant-speed fans in computer-room air conditioners (CRACs), all located in California: first, a forty-year-old, 6,000 ft² data center located at LBNL with down-flow, water-cooled CRACs; second, a three-year-old, 1,000 ft² data center owned by NetApp in Sunnyvale, with up-flow, water-cooled units; and third, a twelve-year-old, 1,300 ft² data center owned by the Electric Power Research Institute (EPRI) in Palo Alto, with down-flow, air-cooled units. The purpose of this case study is to provide concepts for more cost-effective cooling solutions, while keeping in mind that the reliability of computing systems and their respective cooling systems is always a key criterion.

3 The Opportunity

Computer-room air conditioners (CRACs) are often used to cool the Information Technology (IT) loads, especially in legacy data centers. CRACs use a vapor-compression refrigeration cycle to remove heat from the data center air and reject it either into water (in turn cooled by a cooling tower or a dry cooler located outdoors) or directly into the outdoor air via a remote fan-cooled condenser.

While the compressors in the CRAC units are the largest energy consumers in the cooling system, the fans located in the CRACs (used to circulate room air through the units' cooling coils) are also large energy users. CRACs were historically designed to operate at one fan speed even though many units have multiple (up to four) stages of cooling; this scheme simplifies the internal controls of the CRAC (the risk of freezing is minimized and overall CRAC control stability is easier to achieve), but running the fans at full speed even at reduced cooling loads wastes energy.

CRAC fan systems generally follow the fan laws, i.e., the power required by the fan is proportional to the cube of the fan speed. If the efficiencies of the fan, motor, and belt drive all remain constant, the electrical input power will scale the same way. For example, if a fan requires 6.5 kW at 100% speed, it will require only about 3.3 kW at 80% speed, since 0.8³ ≈ 0.51 (see Figure 1). Not only is the fan power reduced, but since the fan power all becomes heat in the space and must be removed by the cooling system, there are significant secondary savings in the compressors and heat-rejection system as well.

Figure 1. Fan power as a function of speed (the "cube law"). (Credit: Smith, 2013)

Note that computer-room air handlers (CRAHs), which have a filter, a chilled-water cooling coil, and a fan, are not the subject of this case study. However, because they do not have the same operational constraints that CRACs have, CRAHs typically offer even better savings potential from fan-speed-control retrofits. See Coles et al., 2012 for a case study on CRAH retrofits; note that the retrofit there also included replacing the fans themselves with more-efficient plenum fans for additional savings, a measure that would apply equally well to CRACs.
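The cube-law scaling above can be sketched in a few lines of Python; the numbers match the example in the text (6.5 kW at full speed), and the constant-efficiency assumption is the same idealization the fan laws make.

```python
# Fan affinity ("cube") law: fan power scales with the cube of fan speed,
# assuming fan, motor, and belt-drive efficiencies stay constant.

def fan_power_kw(full_speed_power_kw: float, speed_fraction: float) -> float:
    """Estimate fan input power at a reduced speed fraction (0.0-1.0)."""
    return full_speed_power_kw * speed_fraction ** 3

full_power = 6.5  # kW at 100% speed, as in the text's example

for pct in (100, 90, 80, 70, 60):
    p = fan_power_kw(full_power, pct / 100)
    print(f"{pct:3d}% speed -> {p:.1f} kW")

# At 80% speed: 6.5 * 0.8**3 = 6.5 * 0.512, about 3.3 kW, as in the text.
```

Note that the savings compound: the avoided fan power is also avoided heat that the compressors and heat-rejection system no longer have to remove.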

While no retrofit kits are available (from the original manufacturer) for the reciprocating-compressor CRACs that were the subject of this study, CRACs with scroll compressors are available with VSD fans, and there are also factory retrofit kits for these more recent CRACs. Maintenance of the fan components (bearings, motors, belts, and sheaves) is also required, so operating them at reduced speed results in less wear and thus additional operational cost savings.

4 Steps for implementing a variable-speed drive fan retrofit

These demonstrations show that existing CRAC fans can be successfully retrofitted with variable-speed drives. There are common steps that each of the three projects took; other steps were applied on a case-by-case basis.

4.1 Install variable-speed drives

A variable-speed drive (VSD) was added to the wiring to the fan motor of each CRAC. VSDs, also known as variable-frequency drives (VFDs), are commodity power-electronic devices, commonly used for HVAC fans and pumps and in numerous industrial applications. They work by converting the incoming 60 Hz alternating current (AC) power supplied by the utility to direct current and then, in the inverter section, creating a variable-frequency, variable-voltage AC output that operates the motor at variable speed.

Existing standard-efficiency AC induction motors can be used (as was the case at NetApp and EPRI), while upgrading to premium-efficiency motors will increase reliability and save additional energy as well (LBNL replaced their CRAC motors). Newer motors have insulation systems that are more resistant to degradation from the voltage spikes produced by VSDs, so they will be more durable than older motors; in any case, if the VSD is kept close to the motor (as was done in all three retrofits), this potential problem is minimized.

VSDs can be used in their basic form (without a bypass; the motor can only operate through the VSD), as was done at LBNL and NetApp, or with a bypass (the motor can also be operated at full speed with the VSD out of the circuit). See Figures 2 and 3. A bypass provides extra redundancy, in that if the VSD fails the motor can still be operated, but it adds size and cost. Since there are several other single points of failure in typical fan applications, many VSD installations use basic drives. The VSD can be mounted inside the CRAC, with attention to condensate dripping off the cooling coil (Figure 2), on a nearby wall or column (Figure 3), or under the floor (Figure 4).

Figure 2. VSD installation: plain VSD mounted inside the CRAC. (Credit: Symanski, 2012)

Figure 3. VSD installation: VSD with bypass mounted on a wall near the CRAC. (Credit: Symanski, 2012)

Figure 4. VSD installation: plain VSD mounted under the floor near the CRAC. (Credit: Ritenour, 2012)

4.2 Install monitoring and control system

Since the CRACs' internal controls did not include fan speed, an external controller was needed. While completely hard-wired control systems are available from a variety of vendors, their installation cost in an existing center is high relative to wireless options. In addition, wireless sensor vendors already offer data-center monitoring systems, and their fan-control options are a relatively easy addition to a monitoring scheme. See DOE 2010 for a discussion of wireless sensor systems.

In the EPRI case, a Vigilent system provided full monitoring and control. In the NetApp case, a Vigilent system provided monitoring and interfaced with the in-house Automated Logic Corporation (ALC) system for VSD control. In the LBNL case, a Synapsense system provided full monitoring and control. Monitoring of supply and return temperatures was included in all systems, but the systems relied on the CRACs' internal controls for temperature control, i.e., ramping the staging of the CRAC cooling systems up and down.

The control system needs to monitor temperatures in the data center space to ensure that the IT cooling loads are met at no higher than the required inlet temperature. Typically an interface between the CRAC controls and the external controller is appropriate so that the fan-speed control can monitor conditions in the CRAC; this interface can also allow resetting of the temperature-control setpoint of the internal CRAC controller.

The Synapsense system included additional temperature sensors on the CRAC's evaporator and condenser to ensure that the compressor suction pressure doesn't go too low and that the compressor discharge pressure doesn't go too high. Since pressure and temperature are not independent in the evaporator, and likewise in the condenser, temperatures (easily measured with clamp-on sensors) are used as proxies for the pressures. The Vigilent system included an additional temperature sensor for each CRAC's evaporator. Both systems ramp up fan speeds as needed to prevent the coil temperature from getting so low as to risk freeze-up.

4.3 Start-up, test, and commission the system

As with any new equipment, the VSDs and their controls need to be programmed, started, tested, and commissioned to make sure they operate according to a sequence of operation, across the full range of load under normal operation and under identified fault conditions, so that appropriate action is taken to protect equipment and alarms are sent to operators and maintenance personnel as appropriate.

One issue common to the sites was condensation within the CRACs. Typically these units operate with wet evaporator (cooling) coils, which means condensation drips off the coils and should be handled by the internal drip pan and drainage. At LBNL, some carry-over of water into the air stream was noted; this was a reason for locating the VSD outside of the CRAC.
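The control behavior described above can be illustrated with a minimal sketch: slow the fan to save energy when the IT inlet temperature is satisfied, but ramp it back up when the inlet runs warm or the evaporator coil gets cold enough to risk freeze-up. The function name, setpoints, speed limits, and step size here are illustrative assumptions, not the vendors' actual algorithms.

```python
# Hypothetical fan-speed control sketch (NOT the Synapsense or Vigilent
# algorithm): freeze protection overrides energy savings, since more
# airflow raises the coil (and thus refrigerant suction) temperature.

MIN_SPEED = 0.60      # assumed minimum allowed fan speed fraction
MAX_SPEED = 1.00
FREEZE_GUARD_C = 2.0  # assumed coil-temperature floor, deg C
STEP = 0.05           # assumed speed change per control interval

def next_fan_speed(speed, inlet_temp_c, inlet_setpoint_c, coil_temp_c):
    """Return the fan speed fraction for the next control interval."""
    # Coil too cold: speed the fan up regardless of space conditions.
    if coil_temp_c < FREEZE_GUARD_C:
        return min(MAX_SPEED, speed + STEP)
    # Inlet too warm: more airflow to meet the IT cooling load.
    if inlet_temp_c > inlet_setpoint_c:
        return min(MAX_SPEED, speed + STEP)
    # Otherwise: slow down toward the minimum and save fan energy.
    return max(MIN_SPEED, speed - STEP)

# Warm coil, inlet below setpoint -> fan slows and saves energy.
print(round(next_fan_speed(0.80, 22.0, 24.0, 8.0), 2))  # 0.75
# Coil near freezing -> fan speeds up regardless of inlet temperature.
print(round(next_fan_speed(0.80, 22.0, 24.0, 1.0), 2))  # 0.85
```

A real deployment would add hysteresis, rate limits, and alarm handling, per the commissioning requirements in section 4.3.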

5 Results

Results from the three retrofits are described below. In summary, all systems saved significant amounts of energy and provided equal or better cooling service.

5.1 Energy savings potential

Variable-speed fan retrofits can significantly reduce energy consumption and improve Power Usage Effectiveness in data centers. The results of these retrofits show significant energy savings for each installation.

A commonly used metric for measuring the performance of data center infrastructure is Power Usage Effectiveness, or PUE, which is the total energy used by the data center divided by the energy used by the IT equipment (Green Grid, 2012). Thus, the lower the PUE the better, with a PUE of 1.0 the theoretical minimum, assuming no energy overhead for cooling, power distribution, or lighting; PUEs below 1.1 have been demonstrated, while values in the range of 1.5 to 2.0 are very common. Constituents of the PUE overhead can be expressed as fractions of IT energy: for example, if the HVAC system uses 0.25 times as much energy as the IT equipment, power distribution losses are 0.05 times the IT energy, and lighting is 0.01 times the IT energy, the total overhead is 0.31 and the PUE is 1.31. While PUE is formally a whole-year ratio of energies, informally (and in this case study) ratios of power can also be used.

At LBNL, the variable-speed fan controls cut cooling power by 6 kW (159 to 153 kW), even though IT power increased by about 100 kW (353 to 454 kW) over the same period. This represents an improvement in the cooling overhead from 0.45 to 0.34, a 24% reduction (Ritenour, 2012; Smith et al., 2013). The simple payback period for the system, including the monitoring and controls for the data center, was under 2 years (Nealon, 2013).
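The PUE arithmetic and the LBNL overhead numbers above can be checked in a few lines; overheads are fractions of IT power, and PUE is one plus their sum.

```python
# Worked PUE and cooling-overhead arithmetic from the text's examples.

def pue(overhead_fractions):
    """PUE from component overheads, each expressed relative to IT power."""
    return 1.0 + sum(overhead_fractions)

# Text example: HVAC 0.25, power distribution 0.05, lighting 0.01.
print(round(pue([0.25, 0.05, 0.01]), 2))  # 1.31

# LBNL retrofit: cooling power / IT power, before and after.
before = round(159 / 353, 2)  # 0.45
after = round(153 / 454, 2)   # 0.34
print(f"cooling overhead: {before} -> {after}")
print(f"reduction: {(before - after) / before:.0%}")  # 24%
```

Note that the overhead improved partly because cooling power fell and partly because the IT load (the denominator) grew.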
At EPRI, the fan controls saved up to 19% of the fan and compressor power (located in the CRACs themselves), and 31% when the outdoor units were included. The outdoor units are refrigerant-to-air condenser coils with fans; the reduced heat rejection meant less condenser fan power, adding to the savings. The overhead of the cooling system (again, as a fraction of IT input power) improved from 0.64 to 0.44 (Symanski, 2012 and 2013). The simple payback period for the system, including the monitoring and controls for the data center, was under 2 years (Vita, 2013).

At NetApp, results were nearly identical to those at EPRI.

5.2 Effect on operations

The wireless monitoring systems greatly improved the ability to check the temperature-control status of the data centers. Combined with the wireless or hybrid control of the airflow through the CRACs, and thus to the IT loads, the service level is increased because the system can dynamically respond to changing loads in the data center or compensate for a malfunctioning CRAC.

At LBNL, the CRACs had been manually operated with just the needed number ("N") of units, which meant a failure required a manual response. With the automatic control system, N+1 units are now operated, typically at reduced fan speed, which improves the redundancy and reliability of the cooling system.

6 Conclusion

Prior to these studies, it was widely assumed that conventional computer-room air conditioners with reciprocating compressors and direct-expansion cooling coils needed to operate with constant fan speed in order to keep the coils from freezing and for other operational reasons. Case studies of retrofitted variable-speed controls in three data centers, using two different vendors of wireless controls, demonstrated that variable-speed operation is not only possible but also results in significant energy savings with equal or improved cooling and reliability. Cooling system energy use fell by 22 to 32% compared to the constant-speed fan case. These systems, including the monitoring and controls needed for proper operation, have simple payback periods under two years.

References

Coles, H., S. Greenberg (Lawrence Berkeley National Laboratory), and C. Vita (Vigilent), 2012. "Demonstration of Intelligent Control and Fan Improvements in Computer Room Air Handlers." Berkeley, CA: Lawrence Berkeley National Laboratory. LBNL-6007e. November. Available at http://hightech.lbl.gov/library

DOE, 2010. Wireless Sensors Improve Data Center Energy Efficiency. Technology Case Study Bulletin, September. Available at http://www1.eere.energy.gov/femp/pdfs/wireless_sensor.pdf

Green Grid, 2012. White Paper #49, PUE: A Comprehensive Examination of the Metric, V. Avelar et al., eds. Available at http://www.thegreengrid.org/en/global/content/whitepapers/wp49-pueacomprehensiveexaminationofthemetric

Nealon, 2013. Personal communication from Patricia Nealon of Synapsense to Steve Greenberg, May 28.

Ritenour, E. (Lawrence Berkeley National Laboratory), 2012. "LBNL IT Computing Facilities Variable Speed CRACs: How Efficient is That!" Presentation at the Silicon Valley Leadership Group Data Center Efficiency Summit, Sunnyvale, CA, October 24. Available at http://svlg.org/wp-content/uploads/2012/10/11-ed-ritenour-lbnl-website.pdf

Smith, G., D. Inniss (Synapsense), and S. Greenberg (Lawrence Berkeley National Laboratory), 2013. "Navigating Tradeoffs between Reliability and Energy Efficiency: A Case Study of Automated CRAC Control in a Medium-Sized Legacy Data Center." In press.

Symanski, D. (Electric Power Research Institute), 2012. "Better Cooling With Slower CRAC Fans." Presentation at the Silicon Valley Leadership Group Data Center Efficiency Summit, Sunnyvale, CA, October 24. Available at http://svlg.org/wp-content/uploads/2012/10/11-dennis-Symanski-Vigilent-WEBSITE.pdf

Symanski, D. (Electric Power Research Institute), 2013. "Data Center Power Quality and Energy Efficiency." Presentation at the Moffett Park Power Quality Workshop, Sunnyvale, CA, January 23.

Vita, 2013. Personal communication from Corinne Vita of Vigilent to Steve Greenberg, June 3.

DOE/EE-0971 September 2013

Printed with a renewable-source ink on paper containing at least 50% wastepaper, including 10% post-consumer waste.