NSIDC Data Center: Energy Reduction Strategies

Airside Economization and Unique Indirect Evaporative Cooling

Executive Summary

The Green Data Center Project was a successful effort to significantly reduce the energy use of the National Snow and Ice Data Center (NSIDC). Through a full retrofit of a traditional air conditioning system, the cooling energy required to meet the data center's constant load has been reduced by over 70% in summer months and over 90% in cooler winter months. This reduction was achieved through the use of airside economization and a new indirect evaporative cooling cycle. One of the goals of this project was to create awareness of simple and effective energy reduction strategies for data centers. Although this particular project was able to maximize the benefits of airside economization and indirect evaporative cooling because of its geographic location, similar strategies may also be relevant for many other sites and data centers in the United States.

Introduction

Current developments and enhancements in digital technology and communication are causing significant growth in the information technology (IT) industry. Data centers provide a clean, safe, and stable environment in which online servers are maintained, physically and virtually, at all hours of the day. Secure facilities across the U.S. house anywhere from a few dozen to a few thousand servers, and in 2010 there were millions of servers installed in the United States [1].

The photo above shows the multi-stage indirect evaporative cooling units installed at the National Snow and Ice Data Center in May 2011. Photo used with permission from the National Snow and Ice Data Center.

Many of these computers are meticulously maintained for peak performance, and the physical environment where they are stored must be kept clean, free of static electricity, and maintained within specific air quality standards. So while improvements in energy efficiency are important, air quality and reliability are also critical to data centers.

Background

The NSIDC provides data for studying the cryosphere: those areas of the planet where snow or ice exist. The two most extensive regions are Antarctica and the Arctic, but high-elevation regions are also important because they contain glaciers and permanent ice and snow fields. The Center began operations at the University of Colorado in 1976. Some of the data held at NSIDC indicate dramatic changes in the cryosphere over the past decade, and many of these changes have been attributed to global warming. NSIDC has operated a Distributed Active Archive Center (DAAC) for NASA's Earth Observing System since 1992. The data managed by the DAAC computer systems are distributed to scientists around the world. In order to deliver these cryospheric data to national and international clients, the data center must be online around the clock. The NSIDC computers are housed in a secure facility within a mixed-use office and research building in Boulder, Colorado, and operate continuously.

In 2008, the Uptime Institute reported that global data center carbon dioxide emissions are poised to equal those of the airline industry if current trends persist, and it estimated that data center carbon dioxide emissions will quadruple between 2010 and 2020 [2]. The irony of the NSIDC's situation is that the tools (the data center) needed to study the problem (climate change) are themselves contributing to the problem.
When it became necessary to replace and upgrade the existing cooling infrastructure, all options were considered. In particular, solutions that would reduce demand and dependency on the local utility provider were given high priority. Computers use a significant amount of electrical energy, and most of this energy is converted into heat within the computer that must then be exhausted; otherwise, damage to the electronics could occur.
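To put that cooling burden in concrete terms, the sketch below converts an IT electrical load into the heat a cooling system must reject. It is illustrative only; the 40 kW load is a hypothetical value, not an NSIDC figure.

```python
# Nearly all electrical power drawn by IT equipment leaves the room as heat,
# so the cooling plant must reject roughly the same power, around the clock.
IT_LOAD_KW = 40.0  # hypothetical IT load, for illustration only

heat_btu_per_hr = IT_LOAD_KW * 3412.14    # 1 kW = 3,412.14 BTU/h
cooling_tons = heat_btu_per_hr / 12000.0  # 1 ton of cooling = 12,000 BTU/h
annual_heat_kwh = IT_LOAD_KW * 8760       # constant load, 8,760 hours per year

print(f"{IT_LOAD_KW:.0f} kW of IT load ~ {cooling_tons:.1f} tons of continuous cooling")
print(f"Heat to be removed each year: {annual_heat_kwh:,.0f} kWh")
```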

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) has established an air state that is suitable for data centers and computer rooms [3]. The limits of this psychrometric envelope are shown in Table 1 and Fig. 1. Humidity control in a data center is critical to stable operation. Too much moisture in the air allows condensation, possibly leading to short circuits that damage delicate integrated circuits, power supplies, and other hardware. Too little humidity allows static charge to accumulate, and the damage from a single large discharge could be significant. Exceeding a certain ambient air temperature can cause thermal damage to equipment, while low ambient temperatures can slow circuits and electricity flow, impeding the productivity of the computer. To maintain an appropriate air state within these limits, the type and control of the heating, ventilating, and air conditioning (HVAC) system for a data center is a vital consideration.

The NSIDC aims to meet the ASHRAE 2008 Thermal Guidelines for Data Centers Allowable Class 1 Computing Environment, as shown in Fig. 1. Internal review of manufacturer specifications revealed that all of the computer and information technology (IT) equipment in the data center would operate well under these conditions, so it was unnecessary to control the system within the tighter ASHRAE Recommended Environment, seen within the bounds of the Allowable Environment in Fig. 1.

Table 1. 2008 ASHRAE psychrometric envelope for data centers.

  Property                      Recommended         Allowable
  Dry-bulb temperature, high    80 F                90 F
  Dry-bulb temperature, low     64 F                59 F
  Moisture, high                60% RH / 64 F DP    80% RH / 63 F DP
  Moisture, low                 42 F DP             20% RH

Figure 1. 2008 ASHRAE Allowable and Recommended Environment operating conditions for data centers, shown on a psychrometric chart of dry-bulb temperature (F) versus humidity ratio (lbs H2O per lb dry air), with the Class 1 and Class 2 recommended envelope and the Class 1 and Class 2 allowable envelopes.
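As an illustration of how a measured air state can be tested against these limits, here is a minimal sketch (not part of the NSIDC control system) that checks a dry-bulb and relative-humidity reading against the allowable Class 1 bounds in Table 1; the dew-point estimate uses the standard Magnus approximation.

```python
import math

# Allowable Class 1 envelope from Table 1
DRY_BULB_LOW_F, DRY_BULB_HIGH_F = 59.0, 90.0
RH_LOW_PCT, RH_HIGH_PCT = 20.0, 80.0
DEW_POINT_HIGH_F = 63.0

def dew_point_f(dry_bulb_f, rh_pct):
    """Approximate dew point (F) from dry bulb (F) and RH (%) via the Magnus formula."""
    t_c = (dry_bulb_f - 32.0) * 5.0 / 9.0
    gamma = math.log(rh_pct / 100.0) + (17.62 * t_c) / (243.12 + t_c)
    dp_c = 243.12 * gamma / (17.62 - gamma)
    return dp_c * 9.0 / 5.0 + 32.0

def within_allowable(dry_bulb_f, rh_pct):
    """True if the air state falls inside the allowable Class 1 envelope."""
    return (DRY_BULB_LOW_F <= dry_bulb_f <= DRY_BULB_HIGH_F
            and RH_LOW_PCT <= rh_pct <= RH_HIGH_PCT
            and dew_point_f(dry_bulb_f, rh_pct) <= DEW_POINT_HIGH_F)

# Example: a 72 F cold aisle at 30% RH sits comfortably inside the envelope.
print(within_allowable(72.0, 30.0))   # True
print(within_allowable(95.0, 50.0))   # False: dry bulb above 90 F
```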

Traditional Data Center Cooling Infrastructure

Traditionally, data centers have been cooled by standard (and often packaged or unitary) air conditioning systems that use a direct expansion (DX) heat removal process via a liquid refrigerant. This is the same type of cycle that runs the air conditioning in a car or a refrigerator. Although a robust and mature technology, the DX cycle has come under scrutiny for cooling data centers over the past decade as other technologies capable of producing the same amount of cooling with less energy have entered the market and become economically viable. The DX cycle requires environmentally harmful synthetic refrigerants (e.g., R-22) and substantial electrical energy to run a compressor; of all the parts in a DX air conditioner, the compressor requires the most energy. Until the past few years, typical data centers were cooled by these packaged DX systems, commonly referred to as computer room air conditioners (CRACs). These systems have been favored for their off-the-shelf packaging, compact size, and simple controls. However, as the need for energy use reduction and environmentally friendly technology becomes more prevalent, DX systems for data centers are becoming antiquated. The NSIDC operated two CRAC units full time until June 2011.

Most data centers are connected to a regional utility grid, as this has traditionally been the most reliable way to power critical computer systems. Power is delivered to an uninterruptible power supply (UPS), which conditions and delivers power to the computers in the data center. This unit also typically charges a large battery array for use during a power outage, allowing the data center to remain online for at least an hour after a power disruption.

Reduced Energy Consumption System

The Green Data Center Project was separated into two main components: server consolidation and virtualization, and installation of a more efficient cooling system. The two separate data centers in the building were consolidated into one (see Figure 2) and reconfigured to a hot/cold aisle arrangement. The latter is the focus of this document, but it is important to realize that some energy reduction was achieved simply by reducing the IT load. The new design includes a unique cooling system that uses both airside economization and a new air conditioner based on the efficient Maisotsenko Cycle [4].

Airside Economization

Airside economization is not a new technology, but it does add some complexity to the control system. Simply put, an airside economizer is a control mode that allows the air handling unit (AHU) to cool the space solely with outdoor air when the outdoor air is cooler than the air in the space. This is commonly referred to as free cooling. In this mode, no air conditioning (DX or other process) is required, and recirculation of room air is reduced to a minimum. As stated previously, humidity is an issue for computers and electronic systems, and in the southeastern part of the United States economizers may be of limited use due to the hot and humid climate.
However, Colorado is much drier year-round and cool enough for over six months of the year, so an airside economizer is a viable option for keeping a data center's air state within the ASHRAE limits.

Figure 2. Before (left) and after (right) NSIDC data center configurations. Original configuration: no generator backup, UPS, low-efficiency dispersed airflow, condensers, closed-system CRAC cooling, infrastructure on two floors, uneven cooling of hardware, redundant data center space, and a distributed data center with server and data storage racks. New configuration: a consolidated IT facility with upgraded indirect evaporative cooling, improved airflow and venting (curtains), solar power for cooling and backup, expanded uninterruptible power supply (UPS), CRAC units retained as backup only, a fresh air economizer with indirect evaporative cooling, high-efficiency hot/cold aisles, and virtual servers. (NSIDC RL-2 Green Data Center Project, NSF ARRA Project 0963204.)

NSIDC hot aisle curtains. Photo used with permission from the National Snow and Ice Data Center.

Indirect Evaporative Cooling

The centerpiece of the new system is a series of multi-stage indirect evaporative cooling air conditioners that utilize the Maisotsenko Cycle. This cycle uses both direct and indirect evaporative cooling to produce a supply air state that approaches the dew point of the incoming air, which in Colorado is typically 30 to 40 degrees Fahrenheit below the incoming air temperature. A patented heat and mass exchanger (HMX) divides the incoming air stream into two streams: working air (WA) and product air (PA). The WA is directed into channels with wetted surfaces and cooled by direct evaporation. At the same time, adjacent channels carry PA that receives no added water from the wetted surfaces. The adjacency of these airstreams allows indirect evaporative cooling of the PA stream: heat is transferred from the PA stream, through an impermeable surface, into the evaporating WA stream. Cool PA is delivered directly to the computer room. The warm, moisture-saturated working air is ultimately exhausted to the outside, or directed into the space when room humidity is below the humidity setpoint. The room humidity can drop below 25% relative humidity when the outside humidity is very low (which happens often in Colorado) and the AHU is in economizer mode, providing a significant amount of outdoor air to the room. In the winter months this effect is even more pronounced, and WA is directed into the space most of the time.

Note that this cooling process has neither a compressor nor a condenser in its cycle. Air from the AHU can now be cooled to a comparably cool temperature using an average of one tenth the energy that would be required by the CRAC system. Water is used in this cycle, and is only used once (single pass). Based on measurements, all eight of the multi-stage indirect evaporative cooling units together consume an average of 3.79 liters per minute (1.0 gallon per minute) when in humidification mode, which is most of the winter. Unfortunately, similar measurements were not taken of CRAC water use, although the CRACs are expected to have used and wasted more water.

The new cooling system consists of a rooftop air handling unit (AHU) powered by a 7.5-kilowatt (10-horsepower) fan motor via a variable frequency drive (VFD), eight multi-stage indirect evaporative cooling air conditioners, and hot aisle containment. Fig. 3 shows a schematic of this system. Note that the AHU is completely responsible for the airside economization by regulating the amount of outdoor air; when the outdoor air is cool enough, the AHU introduces a mixture of this cool air and some return air from the backside of the server racks. Using the VFD and controls, the cooling system operates at 50% or less of full flow rate 90% of the time. Because of the fan affinity laws, when airflow is reduced by 50%, fan power is reduced by 87.5%. This mixed supply air is introduced into the room and allowed to flow out from beneath the multi-stage indirect evaporative cooling air conditioners to keep the cool areas around 22.2 C (72 F). The multi-stage indirect evaporative cooling air conditioners are used only when the AHU can no longer supply cool enough air to the data center. The units are located in the room with the servers so that cool PA can be delivered directly to the front side of the servers (see item 3 in Fig. 3).
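The operating strategy just described can be caricatured as a simple decision, sketched below: stay in economizer mode while outdoor air is cool enough, bring the indirect evaporative cooling units online when it is not, send the saturated working air into the room only when the space is too dry, and let the VFD slow the AHU fan so that power falls with the cube of airflow. This is a simplified, hypothetical illustration rather than the actual NSIDC control sequence; the 30% humidity setpoint, the 75 F threshold, the 7.5 kW fan rating, and the 50%-flow/87.5%-power example come from the text, and the function names are invented for the sketch.

```python
MIN_ROOM_RH_PCT = 30.0     # minimum room relative humidity setpoint (from the text)
ECONOMIZER_LIMIT_F = 75.0  # outdoor temperature above which outdoor air alone is too warm
AHU_FAN_RATED_KW = 7.5     # rated AHU fan motor power (from the text)

def select_mode(outdoor_temp_f: float, room_rh_pct: float) -> dict:
    """Choose a cooling mode and working-air routing for the current conditions."""
    if outdoor_temp_f <= ECONOMIZER_LIMIT_F:
        mode = "economizer: outdoor air mixed with hot-aisle return air"
    else:
        mode = "indirect evaporative cooling: 100% outdoor air, IEC units running"
    # Saturated working air is routed to the room only when the space is too dry.
    working_air = "to space" if room_rh_pct < MIN_ROOM_RH_PCT else "exhausted outside"
    return {"mode": mode, "working_air": working_air}

def ahu_fan_power_kw(flow_fraction: float) -> float:
    """Fan affinity law: fan power scales with the cube of the airflow fraction."""
    return AHU_FAN_RATED_KW * flow_fraction ** 3

print(select_mode(outdoor_temp_f=40.0, room_rh_pct=22.0))  # winter: economizer, WA to space
print(select_mode(outdoor_temp_f=88.0, room_rh_pct=35.0))  # summer: IEC units, WA exhausted
print(f"{ahu_fan_power_kw(0.5):.2f} kW at 50% flow")       # 0.94 kW, an 87.5% reduction
```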

Figure 3. Schematic of the new cooling system for the NSIDC (winter operation). The schematic shows the roof-mounted air handling unit with its outside air, return air, and relief air dampers; the eight multi-stage indirect evaporative cooling units, each with an ECM variable-speed unit fan and working air dampers that can direct humidified air to the space; and the curtained hot aisle/cold aisle server racks with integrated server rack cooling fans. Key to the numbered air streams:
1. ECONOMIZER SUPPLY AIR - The air handling unit mixes outside air with hot aisle return air to provide a minimum 75 F economizer supply air temperature.
2. MULTI-STAGE INDIRECT EVAPORATIVE COOLING PRODUCT SUPPLY AIR - Provides either sensibly cooled or humidified supply air as needed to maintain cold aisle temperature and humidity setpoints.
3. MULTI-STAGE INDIRECT EVAPORATIVE COOLING AISLE SUPPLY AIR - A combination of outside air and product air. Setpoint is 72 F and 30 percent minimum relative humidity.
4. HOT AISLE RETURN AIR - Exhaust air off the server racks will normally be around 95 F.
5. RELIEF AIR - The relief air damper modulates in response to space pressure.
6. RETURN AIR DAMPER - The return air damper controls the quantity of warm hot aisle return air mixed with outside air in standard economizer mode.
7. OUTSIDE AIR - Outside air is utilized in all cooling modes. When the outside air temperature is above 75 F, the air handling unit provides 100 percent outside air, meeting system requirements for both product and working air. The quantity of outside air increases as the cooling units operate in cooling mode.
8. MULTI-STAGE INDIRECT EVAPORATIVE COOLING WORKING AIR EXHAUST - When the outside air temperature is too high to directly cool the space, the cooling units sensibly cool the product air by pulling the economizer air through the cooling unit heat exchanger. The humid working air is exhausted directly outside without adding any humidity to the space.

Measurement and Data

By late 2009, it was determined that the aging CRAC units operating at the NSIDC had to be replaced soon, as they were approaching the end of their useful life. One of them was actually rusting on the inside due to leaking water (these CRACs were equipped with humidification features). The energy consumption of these CRAC units needed to be monitored to determine the power use of the traditional system and to calculate the overall efficiency of the data center. Although this could have been approximated using nameplate ratings and efficiencies, actual energy consumption often varies significantly and can only be determined with proper monitoring equipment. Power meters were initially installed on the two existing CRAC units and the uninterruptible power supply (UPS). Data from these meters were collected over a period of 1.5 years.

Data Center Efficiency Determination

The Power Utilization Effectiveness (PUE) for any data center can be calculated using Eq. (1) for any specific point in time (using power in kilowatts) or for a time period (using energy in kilowatt-hours) [5]:

PUE = Total Power / IT Power    (1)

It is important to point out that PUE is an efficiency metric and by definition is normalized for different IT loads; therefore, two PUEs are comparable regardless of the IT load. It should be noted that the NSIDC, like most data centers, operates a UPS in order to condition power and to maintain a large battery array for use during a grid electricity outage. Like all power electronic equipment, the UPS incurs some losses. The UPS currently installed at the NSIDC is only 87.7% efficient, which corresponds to a loss of 12.3% to power conditioning and to the AC-to-DC conversion needed to charge the backup battery array. This loss is accounted for in Eq. (1) by the difference between UPS Power and IT Power:

Total Power = Cooling Power + UPS Power + Misc. Power    (2)

IT Power = UPS Power x UPS Efficiency    (3)

IT Power refers to the actual power demand of the IT equipment (e.g., servers, network switches, and monitors); the sum of all equipment power uses at any point in time equals IT Power. UPS Power refers to the grid or input power supplied to the UPS. IT Power and UPS Power are related directly by the UPS efficiency, as in Eq. (3), and Total Power includes the extra power consumed by the conversion losses through the UPS, as in Eq. (2).
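As a worked illustration of Eqs. (1) through (3), the sketch below computes PUE from a single set of meter readings; the 87.7% UPS efficiency is the value quoted above, while the cooling, UPS input, and miscellaneous power values are hypothetical rather than NSIDC measurements.

```python
UPS_EFFICIENCY = 0.877   # measured UPS efficiency quoted in the text

def pue(cooling_kw: float, ups_input_kw: float, misc_kw: float = 0.0) -> float:
    """Power Utilization Effectiveness per Eqs. (1)-(3)."""
    total_kw = cooling_kw + ups_input_kw + misc_kw   # Eq. (2): Total Power
    it_kw = ups_input_kw * UPS_EFFICIENCY            # Eq. (3): IT Power
    return total_kw / it_kw                          # Eq. (1): PUE

# Hypothetical snapshot: 30 kW of cooling, 45 kW into the UPS, 2 kW miscellaneous.
print(round(pue(cooling_kw=30.0, ups_input_kw=45.0, misc_kw=2.0), 2))   # 1.95
```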

Figure 4. Monthly average PUE comparison (May through April) between the CRAC system (May 2010 through April 2011) and the new system (May through October 2011).

Figure 5. Monthly average cooling power (kW) comparison between the CRAC system and the new system, spanning the CRAC operation, new system tuning, and new system operation periods, with the average IT load shown for reference.

Inefficiency of the CRAC System

Because the data center operates 24 hours per day, there is a constant need for cooling, and without airside economization to take advantage of cold winter air, the CRAC units ran their compressors 24 hours per day at near constant load. Therefore, the PUE of the CRAC system at the NSIDC did not vary much with outdoor temperature from month to month. The new system allows the cooling power required to maintain the ASHRAE air state to decrease in correlation with the outdoor temperature. When the new system (the AHU and the multi-stage indirect evaporative cooling air conditioners) was installed, more power meters had to be added to account for all power uses of the cooling system (i.e., the AHU fan and the multi-stage indirect evaporative cooling air conditioners). For comparison purposes, monthly average PUEs to date were also calculated for the new system, and more significant seasonal variation is expected.

Employing Hot Aisle Containment

The legacy data center had a non-optimized equipment layout (see Fig. 2, left side). The NSIDC data center was reconfigured to a hot aisle/cold aisle arrangement by rearranging the server racks and installing curtains (see Fig. 2, right side). With hot aisle containment, supply air can be delivered at a higher temperature, significantly reducing cooling energy use. In addition, hot aisle containment allows supply air to be delivered at a lower flow rate, since mixing is minimized.

Renewable Energy

NSIDC is installing a photovoltaic (PV) solar system that will be the first of its kind at the University of Colorado (CU). The panels are locally made (in Loveland, CO) using unique, low-cost cadmium telluride (CdTe) thin-film technology. The array will use 720 panels producing just over 50 kW. This power level will more than offset the power consumed by the data center when the sun is shining. The solar array also acts as a backup generator: during a power failure, the solar power is automatically diverted to a battery array, which runs critical components. The Green Data Center is a research project, and NSIDC will be testing this technology to see how it performs compared to existing solar installations at the University. The inverters being used were manufactured in Denver, CO.

Results

Fig. 4 shows the reduction in PUE compared to the CRAC system. The CRAC system had an annual average PUE of 2.0. The new system has an average PUE to date (June through October 2011) of 1.55. This average will decrease through about April of 2012 because outdoor temperatures in Colorado are generally cool during this time of the year, and since September 1, 2011, the monthly PUE has been below 1.35 due to airside economization with cool fall and winter air. Note that the PUE in May of 2011 was actually higher than it was in 2010 because the CRAC system was still operational while the new system was undergoing commissioning and testing; during this time, the complex control sequences for the new system were being tuned. This was a one-time occurrence and should not happen again.
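Because PUE is normalized to the IT load, the improvement from 2.0 to 1.55 translates directly into facility-level savings for a constant IT load. The short sketch below does that arithmetic with a hypothetical annual IT energy, since the report does not state one.

```python
PUE_CRAC = 2.0    # annual average PUE with the legacy CRAC system (from the text)
PUE_NEW = 1.55    # average PUE of the new system, June-October 2011 (from the text)

def total_energy_kwh(pue: float, it_energy_kwh: float) -> float:
    """Total facility energy implied by a PUE and an IT energy (PUE = total / IT)."""
    return pue * it_energy_kwh

it_energy = 100_000.0  # hypothetical annual IT energy in kWh, for illustration only
old_total = total_energy_kwh(PUE_CRAC, it_energy)
new_total = total_energy_kwh(PUE_NEW, it_energy)

print(f"Total facility energy reduction: {(old_total - new_total) / old_total:.1%}")          # 22.5%
print(f"Non-IT (overhead) energy reduction: {(old_total - new_total) / (old_total - it_energy):.1%}")  # 45.0%
```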
Fig. 5 shows the cooling energy comparison for the same total time period. The new system used less than 2.5 kilowatts of power on average for the month of October 2011. Compared to October 2010 (during which the IT load was only 33% higher and outdoor conditions were very similar), the average cooling power usage was almost 95% less. The average cooling power during the upcoming winter months of November 2011 through February 2012 should be similar because winters in Colorado are generally very cool. Note that there was actually a slight increase in cooling power during the winter months of 2010 while using the CRAC system. Although it is not immediately clear what caused this, it was most likely due to the CRAC active humidification process, which was necessary to maintain the lower humidity limit prescribed by the applicable ASHRAE standards. The CRACs used a series of heat lamps (inside each unit) to evaporate water into the airstream; this is inherently inefficient because the air must then be cooled further to remove the heat added by the lamps. The new system, by contrast, produces very humid air (working air) as a byproduct, so humidifying the space when necessary requires no additional energy.

These significant energy savings were accomplished while maintaining the 2008 ASHRAE standards for data center air quality, meaning warranties on servers and IT equipment remain valid. Additionally, maintenance has been reduced to air filter changes, which is a simple task. The multi-stage indirect evaporative cooling air conditioners are mechanically very simple, with no substantial moving hardware other than a small variable-speed electronically commutated motor (ECM) powering each fan.

Outreach

To ensure that operating conditions remain within the ASHRAE prescribed envelope, nearly two dozen sensors have been installed in the data center to measure temperatures, humidity, and power use. To share this information with the public, two datalogging networks were installed in the room to collect, process, and display the data on the NSIDC's public website (http://nsidc.org/about/green-data-center/). The data can be used to monitor current performance and the past day's trends in energy use and temperatures. The NSIDC hopes that this Green Data Center Project can serve as a successful case study for other data centers to look to and learn from.

Conclusion

The problems facing our planet require approaches different from traditional methods of operation, especially in the rapidly evolving and growing technology industry, which consumes extraordinary amounts of resources and indirectly contributes to global climate change. The solutions to these problems do not necessarily require extraordinary complexity, nor do they need to be expensive. This Green Data Center Project was completed for a modest sum relative to the substantial energy savings achieved.

Acknowledgment

NREL would like to thank the NSIDC for implementing and monitoring this project, and the RMH Group for their innovative design and expert advice throughout the development of this project. Working together, the team was able to develop the low-energy design that has transformed the NSIDC into one of the most efficient data centers in the United States. The NSIDC DAAC is operated with NASA support under Contract NNG08HZ07C to the University of Colorado. The Green Data Center cooling system was funded by an ARRA grant to upgrade existing research facilities from National Science Foundation Grant CNS 0963204.

References

[1] J. Koomey, "Growth in Data Center Electricity Use 2005 to 2010," Analytics Press, Table 1, p. 20, August 2011.

[2] B. Carmichael, "Data centers set to outpace airlines in GHG emissions," OnEarth Magazine, May 2, 2008. http://www.onearth.org/blog/whats-happening-on-earth/data-centers-set-to-outpace-airlines-in-ghg-emissions.

[3] ASHRAE, "2008 ASHRAE Environmental Guidelines for Datacom Equipment," pp. 1-8, 2008.

[4] "A Technical Description of the Maisotsenko Cycle," Idalex Technologies, 2003. http://idalex.com/technology/how_it_works_-_technological_perspective.htm.

[5] G. Verdun et al., "The Green Grid Metrics: Data Center Infrastructure Efficiency (DCiE) Detailed Analysis," The Green Grid, 2000.

For more information and resources, visit the FEMP website at femp.energy.gov.

Printed with a renewable-source ink on paper containing at least 50% wastepaper, including 10% post consumer waste.

DOE/GO-0202-3509, May 2012