Reducing the Annual Cost of a Telecommunications Data Center




Applied Math Modeling White Paper
By Paul Bemis and Liz Marshall, Applied Math Modeling Inc., Concord, NH
March 2011

Introduction

The facilities managers for a large internet service provider have known for a while that one of their data centers is over-cooled. Over-cooling translates into unnecessary energy consumption and expense, so the managers knew that some changes to the data center were needed. Several options were possible, such as shutting down one or more of the cooling units. Many questions arose, however. For example, what would be the consequences of shutting down a CRAC? Would it be possible to shut down two? If so, which two? Could the supply temperatures be increased? To answer these questions, the operators decided to use computational fluid dynamics (CFD), a tool that uses airflow predictions to demonstrate how effectively the cooling air reaches, and removes heat from, the equipment in the room. Using CFD-based modeling techniques to quantify the efficiency of the data center, operators can compare different energy-saving strategies before physical changes to the room are made.

Problem Description

The CFD modeling is done using CoolSim software from Applied Math Modeling. The raised-floor data center is 4720 sq. ft. in size (80 ft x 59 ft) and makes use of a ceiling plenum return.

Figure 1: Isometric view of the room geometry, showing the rack rows (pink and gray), CRACs (blue tops), perforated floor tiles (white), and overhead ceiling grilles (green)

The 2 ft supply plenum, 15 ft room height, and 5 ft ceiling plenum combine to form a space that is 22 ft high (see Figure 1). The data center contains ten rows of equipment with either 17 or 21 racks per row. The heat loads in the racks vary from 10 W up to 7.8 kW, as shown in Figure 2. The total IT heat load in the room is 363 kW, with a density of 76.9 W/sq. ft.

Figure 2: Rack heat loads, which range from a low of 10 W (dark blue) to a high of 7.8 kW (red); while there are only two 10 W racks, there are several 50 W racks, which also appear dark blue

Five Liebert DS105AU CRACs are positioned along each of two opposing walls, for a total of ten CRACs in the room. These direct expansion (DX) cooling units are controlled in zones, each of which consists of two opposing CRACs (Figure 3). The data center was originally designed for a 1 MW heat load, but in its current use, the IT load is only about one-third of that value (363 kW). Assuming a 20 °F temperature rise across all racks, 57,020 CFM of cooling air is needed for the present heat load. Each CRAC delivers 14,500 CFM, so with all ten CRACs operating, a total of 145,000 CFM is generated, roughly two and a half times the needed amount. Thus, in normal operating mode, two zones are disabled, so that only six of the ten CRACs are in use. The disabled cooling zones, 2 and 4, are shown in Figure 3. For this configuration, the six active CRACs supply 87,000 CFM of cooling air, which is about 50% more than required for the heat load. The total cooling capacity of the six CRACs exceeds the heat load by approximately the same amount.

Measurements of supply air temperatures place the range between 50 °F and 68 °F. Return temperatures are also available for comparison with values predicted by the CFD simulation. Once a CFD model is created and validated, alternative energy optimization scenarios can be investigated, including disabling additional CRACs, hot or cold aisle containment, and modification of the CRAC cooling parameters. In this study, the first and third of these options will be considered.
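As a rough check on the airflow requirement quoted above, the sensible-heat relation between heat load, temperature rise, and airflow can be evaluated directly. The sketch below (Python) assumes the common sea-level factor of 1.085 BTU/hr per CFM per °F; the paper does not state which constants it used, so the result only approximates, to within about 0.1%, its 57,020 CFM figure.

```python
# Sketch: required cooling airflow from the IT heat load, assuming the common
# sea-level sensible-heat approximation CFM ~ q[BTU/hr] / (1.085 * dT[F]).
# The exact constants used in the paper are not stated.

BTU_PER_HR_PER_KW = 3412.14   # 1 kW expressed in BTU/hr
AIR_FACTOR = 1.085            # assumed factor, BTU/hr per CFM per deg F

def required_cfm(it_load_kw: float, delta_t_f: float) -> float:
    """Airflow (CFM) needed to absorb it_load_kw with a delta_t_f rack temperature rise."""
    return it_load_kw * BTU_PER_HR_PER_KW / (AIR_FACTOR * delta_t_f)

need = required_cfm(363.0, 20.0)   # ~57,100 CFM, close to the paper's 57,020 CFM
print(f"required: {need:,.0f} CFM")
print(f"all ten CRACs: {10 * 14_500 / need:.2f}x the requirement")   # ~2.5x
print(f"six CRACs:     {6 * 14_500 / need:.2f}x the requirement")    # ~1.5x
```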

Preliminary Results for the Baseline Case

The first CFD model created for this study (the baseline case) corresponds to the data center operating in normal mode, with six CRACs operational, as shown in Figure 3. Boundary conditions for the simulation include the heat load and flow rate associated with each rack and the supply temperature and flow rate associated with each CRAC.

Figure 3: Five cooling zones, each consisting of a pair of opposing CRACs (Zone 1: CRACs 1 and 2; Zone 2: CRACs 3 and 4; Zone 3: CRACs 5 and 6; Zone 4: CRACs 7 and 8; Zone 5: CRACs 9 and 10); under normal operating conditions, Zones 2 and 4 are shut down

The measured supply and return temperatures are shown in Table 1, along with the predicted return temperatures from the CFD model. In all but one case, the predicted temperatures are below the measured values. Often, when the CFD model under-predicts the return temperature on every CRAC, it means that either the heat loads are under-represented or the CRAC flow rates are too high. In this data center, it could be one of these factors or a combination of both, but the effect is small, since the error is below 5% in all cases. A validation of the preliminary model, such as this, is an important step if modifications are to be made. A demonstration that the base model accurately captures the physics to within an acceptable margin of error means that it can be used to correctly predict trends if one or more changes are made.

CRAC | Measured Supply Temperature (°F) | Measured Return Temperature (°F) | Predicted Return Temperature (°F) | Error (%)
  1  | 51.6 | 70.0 | 67 | 4.3
  2  | 50.7 | 69.8 | 67 | 4.0
  5  | 53.5 | 71.2 | 69 | 3.1
  6  | 53.8 | 71.6 | 69 | 3.6
  9  | 67.9 | 75.4 | 74 | 1.9
 10  | 62.4 | 73.7 | 74 | 2.3

Table 1: Measurements of supply and return temperature and predicted return temperature for the baseline case; the error in the predicted return temperature is under 5% for all CRACs

Contours of rack inlet temperature for the baseline case are shown in Figure 4. The temperatures all fall below the ASHRAE recommended maximum value of 80.6 °F. The maximum rack inlet temperature is a good metric to follow when comparing cooling strategies. For an over-cooled data center, however, the minimum rack inlet temperature is also important to follow. According to the ASHRAE guidelines, the rack inlet temperature should not go below 64.4 °F, although the allowed minimum value is 59 °F. For the baseline case, at least half of the racks have inlet temperatures that are too cold.

Figure 4: Rack inlet temperatures for the baseline case, in which the CRACs in Zones 2 and 4 are inactive

Data Center Metrics

PUE and DCIE

A number of metrics have been defined in recent years that can be used to gauge the efficiency of a data center. Metrics can also be used to test whether changes to the data center bring about reduced (or increased) power demands. One of the most popular metrics is the Power Usage Effectiveness, or PUE, defined as the ratio of total facility power to total IT power:

PUE = Total Facility Power / Total IT Power    (1)

The total facility power includes that needed to run the CRACs (chillers and fans), IT equipment, battery backup systems, lighting, and any other heat-producing devices. Thus the PUE is always greater than 1, but values that are close to 1 are better than those that are not. A typical value is 1.8, a good value is 1.4, and an excellent value is 1.2. The reciprocal of the PUE is sometimes reported as the Data Center Infrastructure Efficiency (DCiE).

COP

The largest contributor to the total facility power is the cooling system, comprising the heat exchangers (chillers, condensers, and cooling fluid pumps, for example) and fans. The heat exchanger portion of the CRAC is a heat pump, whose job is to move heat from one location (inside the room) to another (outside). Heat pumps are rated by their coefficient of performance, or COP. The COP is the ratio of the heat moved by the pump to the work done by the pump to perform this task. The work done by the pump encompasses the heat exchanger work and does not include the CRAC fans. The COP can also be expressed as a power ratio, making use of the rate at which heat is moved (in Watts, say) or work is done (again, in Watts):

COP = Heat Moved / Work Done    (2)

Using more practical terms, the COP is the ratio of the total room heat load to the power needed to run the chillers, condensers, and other heat rejection equipment. For data center cooling equipment, COP values range from 2 to 5, with larger numbers corresponding to better heat pumps. Note that an alternative definition of COP could be made for the data center as a whole, rather than just for the heat rejection system. In this alternative definition, the work done would include the power used to run the CRAC fans. For the purposes of this paper, the traditional definition of COP is used.
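Both metrics are simple power ratios, so they translate directly into code. The following is a minimal sketch in Python; the function names and the round example numbers are ours, not the paper's.

```python
# Sketch: PUE (Eq. 1) and COP (Eq. 2) as simple power ratios.
# Inputs are assumed to be expressed in consistent units (kW here).

def pue(total_facility_power_kw: float, total_it_power_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over total IT power (Eq. 1)."""
    return total_facility_power_kw / total_it_power_kw

def cop(heat_moved_kw: float, work_done_kw: float) -> float:
    """Coefficient of performance: heat moved over work done by the heat pump (Eq. 2)."""
    return heat_moved_kw / work_done_kw

# Round illustrative numbers (not the paper's): a 500 kW IT load inside a
# 900 kW facility gives the "typical" PUE of 1.8 quoted above.
print(pue(900.0, 500.0))   # 1.8
print(cop(500.0, 200.0))   # 2.5, inside the 2-to-5 range quoted for data center cooling
```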

Return Temperature Index

The Return Temperature Index (RTI), a trademark of ANCIS Inc. (www.ancis.us), is a percentage based on the ratio of the total demand air flow rate to the total supply air flow rate:

RTI = (Total Demand Air Flow Rate / Total Supply Air Flow Rate) x 100%    (3)

Alternatively, it can be computed using the ratio of the average temperature drop across the CRACs to the average temperature rise across the racks. In either case, a value of 100% indicates a perfectly balanced airflow configuration, where the supply equals the demand. Values with RTI < 100% have excess cooling airflow, so short-circuiting across the CRACs exists. Values with RTI > 100% have a deficit of cooling air, so there is recirculation from the rack exhausts to the rack inlets. It is best to have RTI values that are less than, but close to, 100%.

Rack Cooling Index

The Rack Cooling Index (RCI), a registered trademark of ANCIS Inc., is computed using the average number of degrees that the rack inlet temperature falls above (or below) the ASHRAE recommended temperature range (64.4 °F to 80.6 °F). One index is defined for temperatures above the range (RCI_HI) and another for temperatures below the range (RCI_LO). For the high side:

RCI_HI = [1 - SUM_i (T_i - T_R_HI) / (N x (T_A_HI - T_R_HI))] x 100%    (4)

where
  T_R_HI is the ASHRAE recommended maximum temperature (80.6 °F),
  T_A_HI is the ASHRAE allowed maximum temperature (90 °F),
  T_i is the maximum inlet temperature on the i-th rack,
  the sum runs over the n racks with T_i > T_R_HI, and
  N is the total number of racks in the sample.

The index on the low side is similarly defined:

RCI_LO = [1 - SUM_i (T_R_LO - T_i) / (N x (T_R_LO - T_A_LO))] x 100%    (5)

where
  T_R_LO is the ASHRAE recommended minimum temperature (64.4 °F),
  T_A_LO is the ASHRAE allowed minimum temperature (59 °F),
  T_i is the minimum inlet temperature on the i-th rack,
  the sum runs over the n racks with T_i < T_R_LO, and
  N is the total number of racks in the sample.

Ideally, no racks should be outside the recommended range, so the ideal value is 100% for both indices. Values between 90% and 100% are in the acceptable to good range, while values under 90% are considered poor.
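To make the airflow metrics concrete, here is a minimal sketch of Eqs. (3) through (5) in Python. The ASHRAE limits are the ones quoted above; the list of rack inlet temperatures is illustrative only (it is not the paper's CFD output), and the airflow figures passed to the RTI call are the ones used later in the paper.

```python
# Sketch: RTI (Eq. 3) and the Rack Cooling Indices (Eqs. 4 and 5) from per-rack
# inlet temperatures. ASHRAE limits are those quoted above; the inlet list is
# purely illustrative and is not the paper's CFD output.

T_R_HI, T_A_HI = 80.6, 90.0   # recommended / allowable maximum (deg F)
T_R_LO, T_A_LO = 64.4, 59.0   # recommended / allowable minimum (deg F)

def rti(demand_cfm: float, supply_cfm: float) -> float:
    """Return Temperature Index (Eq. 3), as a percentage."""
    return demand_cfm / supply_cfm * 100.0

def rci_hi(inlet_temps_f: list[float]) -> float:
    """Rack Cooling Index, high side (Eq. 4)."""
    over = sum(t - T_R_HI for t in inlet_temps_f if t > T_R_HI)
    return (1.0 - over / (len(inlet_temps_f) * (T_A_HI - T_R_HI))) * 100.0

def rci_lo(inlet_temps_f: list[float]) -> float:
    """Rack Cooling Index, low side (Eq. 5)."""
    under = sum(T_R_LO - t for t in inlet_temps_f if t < T_R_LO)
    return (1.0 - under / (len(inlet_temps_f) * (T_R_LO - T_A_LO))) * 100.0

inlets = [50.0, 52.0, 55.0, 58.0, 60.0, 70.0]     # illustrative rack inlet temperatures (deg F)
print(f"RTI    = {rti(59_871, 87_000):.0f}%")     # 69%, using the airflow figures quoted later
print(f"RCI_HI = {rci_hi(inlets):.0f}%")          # 100% (no rack above 80.6 deg F)
print(f"RCI_LO = {rci_lo(inlets):.0f}%")          # about -45% (well below zero: far too cold)
```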

Metrics for the Baseline Data Center

Using the metrics defined above, the baseline data center configuration can now be evaluated using a combination of measurements and CFD results. Because the cooling system is controlled in five separate zones, the facility managers have been able to measure the electric power needed to run the heat rejection system (the CRAC power minus the fans). The measured value, 269.1 kW, is a snapshot of one day's power demand for the three normally functioning zones. They have also determined that each CRAC uses 8 kW to run its fan, so the total CRAC fan power is 48 kW. Combining these, the total measured cooling power is 269.1 + 48 = 317.1 kW.

The total rack heat load in the room is 363 kW, and this includes the PDUs, which are rack mounted. If 5% of this value is assumed for additional support infrastructure (lights, etc.), the total IT heat load in the room is 363 + 18.2 = 381.2 kW. Taking the most conservative approach described above, the CRAC fan power will be included in the room heat load. Assuming that all of the CRAC fan power will eventually be converted to heat, the total room heat load becomes 381.2 + 48 = 429.2 kW. The ratio of the total room heat load to the power needed to run the heat rejection system (269.1 kW) is the COP:

COP = 429.2 / 269.1 = 1.59    (6)

This value is low, indicating that the data center could support more equipment for the amount of power being delivered to the cooling system. Alternatively, it suggests that shutting down one or more of the CRACs is an option to be considered.

To calculate the PUE, the total facility power is needed. This is simply the total cooling power (317.1 kW) plus the total room heat load (429.2 kW), or 746.3 kW. Dividing the total facility power by the total IT heat load (381.2 kW), the PUE is:

PUE = 746.3 / 381.2 = 1.96    (7)

The return temperature index can be computed using the boundary conditions used for the CRACs and IT equipment. The total supply air flow from the CRACs is 87,000 CFM. The demand air flow from the IT load (363 kW) is 57,020 CFM. Assuming the additional 5% of heat load, the demand air flow should be adjusted by 5%, bringing the total to 59,871 CFM. The ratio of the adjusted demand air flow to the supply air flow is:

RTI = (59,871 / 87,000) x 100 = 69%    (8)

Consistent with earlier calculations, the RTI also suggests that the data center is overcooled.
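The arithmetic behind Eqs. (6) through (8) can be reproduced in a few lines. A sketch, using the measured values listed above; the variable names are ours, and the 5% support-infrastructure overhead follows the paper's assumption.

```python
# Sketch: reproduce the baseline metrics of Eqs. (6)-(8) from the measured
# quantities listed above. Variable names are ours; input values are the paper's.

it_heat_load_kw        = 363.0          # rack heat load, PDUs included
support_overhead       = 0.05           # assumed 5% for lights and other infrastructure
heat_rejection_kw      = 269.1          # measured chiller/condenser power, three zones
crac_fan_kw            = 6 * 8.0        # six active CRACs at 8 kW per fan = 48 kW

total_it_heat_load_kw  = it_heat_load_kw * (1.0 + support_overhead)      # ~381.2 kW
total_room_heat_kw     = total_it_heat_load_kw + crac_fan_kw             # ~429.2 kW
total_cooling_power_kw = heat_rejection_kw + crac_fan_kw                 # 317.1 kW
total_facility_kw      = total_cooling_power_kw + total_room_heat_kw     # ~746.3 kW

cop = total_room_heat_kw / heat_rejection_kw          # 1.59  (Eq. 6)
pue = total_facility_kw / total_it_heat_load_kw       # 1.96  (Eq. 7)
rti = 59_871 / 87_000 * 100.0                         # 69%   (Eq. 8)

print(f"COP = {cop:.2f}, PUE = {pue:.2f}, RTI = {rti:.0f}%")
```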

The degree to which it is overcooled is indicated by the high and low rack cooling indices. An analysis of the CFD results using Eqs. (4) and (5) yields

RCI_HI = 100%    (9)

and

RCI_LO < 0%    (10)

A value of 100% for RCI_HI means that no racks have inlet temperatures above the recommended maximum value. A value less than 0 for RCI_LO indicates that the average number of degrees below the recommended minimum value is greater than the number of degrees between the recommended and allowable minimum values. In other words, the inlet temperatures on the whole are much too cold. The metrics calculated for the baseline case are summarized in Table 2.

Estimating the Baseline Data Center Costs

Before considering changes to the data center, the cost of running the facility in its present state is estimated. To determine the cost, the total facility power is needed along with the cost of electricity. Using 746.3 kW as the total facility power and $0.09 as the cost per kWh, the estimated annual cost of running the data center is about $588,300, which is within 10% of the actual cost. While this value is not based on the CFD analysis, a similar calculation can be done for proposed modifications to the data center. Thus, while a CFD analysis can be used to judge the efficacy of each design, the companion energy calculation can be done to estimate the cost savings.
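The cost estimate follows directly from the facility power and the electricity rate. A minimal sketch, assuming continuous operation (8,760 hours per year), a figure the paper does not state explicitly but which reproduces the roughly $588,300 estimate:

```python
# Sketch: annual energy cost from total facility power, assuming continuous
# operation (8,760 h/yr); the paper does not state its hour count, but this
# assumption reproduces the roughly $588,300 baseline estimate.

HOURS_PER_YEAR = 8_760

def annual_cost_usd(facility_power_kw: float, rate_usd_per_kwh: float = 0.09) -> float:
    return facility_power_kw * HOURS_PER_YEAR * rate_usd_per_kwh

print(f"${annual_cost_usd(746.3):,.0f} per year")   # ~$588,000 for the baseline configuration
```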

Modifying the Design

Disabling Zones

As a first step, each of the three active zones is disabled in a series of trials. These trials are solved concurrently on separate nodes at CoolSim's remote simulation facility (RSF) using the CRAC Failure Analysis model. Trial 1 has Zones 1, 2, and 4 disabled; Trial 2 has Zones 2, 3, and 4 disabled; and Trial 3 has Zones 2, 4, and 5 disabled. For each of these trials, the maximum rack inlet temperature is, at most, 75 °F, well below the ASHRAE recommended value of 80.6 °F. Trial 1 has the highest rack inlet temperature, and contours for all of the racks for this case are shown in Figure 5. Note that when the left two zones are shut down, the temperature on that side of the room increases. Pathlines of the supply air in the plenum (Figure 6) show that jets from the opposing CRACs collide and deflect the cooling air to the left side of the room, keeping the rack temperatures in range. These trials illustrate that the simplest modification to the data center, shutting down one of the zones, will not adversely impact the equipment.

Figure 5: Contours of rack inlet temperature for Trial 1 of the baseline case, where Zones 1, 2, and 4 are shut down

Figure 6: Pathlines of supply air in the plenum for Trial 1 of the baseline case, where Zones 1, 2, and 4 are shut down

The data center metrics computed for Trial 1 show a great deal of improvement in energy efficiency and an associated cost savings. Because the amount of power needed to run the cooling system and CRAC fans is two-thirds of the earlier value, the total cooling power is reduced to 211.4 kW and the COP is increased to 2.30. The total facility power is reduced to 624.6 kW, leading to a decrease in the PUE to 1.64. The return temperature index (RTI) increases from 69% to 103%. Ideally, the RTI should be below 100%, but because an additional 5% of infrastructure equipment is included in the total heat load, the demand air flow rate is assumed to have a corresponding increase, which may be too much. (Additional heat from overhead lamps may be lost through the ceiling, for example.) The RCI_HI index remains at 100%, indicating that there are still no racks with temperatures above the recommended value. The RCI_LO index remains below 0, but only slightly. Thus, while the rack inlet temperatures are not as cold as before, they are still colder than they need to be. Owing to the drop in the total facility power, the cost to run the data center also drops. The new annual cost is estimated to be $492,400, representing a savings of about $95,900. These results are summarized in Table 2.

Increasing the Supply Temperatures

One of the dominant factors in reducing data center energy consumption is the supply air temperature. For every 1.8 °F increase in supply air temperature, the efficiency of the heat pump improves by 3.5% (Design Considerations for Datacom Equipment Centers, Atlanta: ASHRAE, 2005). Further, by increasing the supply air temperature, the window of free cooling opens, since air-side or water-side economizers can be used on more days of the year. Economizers improve the efficiency of the cooling system by making use of the reservoir of outside air in the heat rejection process. If the temperature difference between the supply air and the outside air is reduced, the chillers and condensers in the heat rejection system can be augmented or even replaced by economizers, resulting in huge gains in the COP. Because the data center is initially overcooled, it is a prime candidate for an increased supply temperature. Thus, as a second modification, all of the supply temperatures are increased to 65 °F.
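The 3.5%-per-1.8 °F rule can be applied directly to the COP values already computed, using the average supply temperatures quoted in the discussion below (57 °F for Trial 0, 60 °F for Trial 1). The sketch assumes a simple linear (non-compounding) scaling, which the paper does not state explicitly; with that assumption, the COP values quoted later are reproduced to within about 0.01.

```python
# Sketch: apply the "3.5% better COP per 1.8 F of supply-temperature increase"
# rule of thumb, assuming a simple linear (non-compounding) scaling. This
# reproduces the COP values quoted later in the paper to within roughly 0.01.

def scaled_cop(base_cop: float, base_supply_f: float, new_supply_f: float) -> float:
    improvement = 0.035 * (new_supply_f - base_supply_f) / 1.8
    return base_cop * (1.0 + improvement)

print(scaled_cop(1.59, 57.0, 65.0))   # ~1.84 (Trial 0 at 65 F)
print(scaled_cop(1.59, 57.0, 68.0))   # ~1.93 (Trial 0 at 68 F; the paper quotes 1.94)
print(scaled_cop(2.30, 60.0, 65.0))   # ~2.52 (Trial 1 at 65 F; the paper quotes 2.53)
print(scaled_cop(2.30, 60.0, 68.0))   # ~2.66 (Trial 1 at 68 F)
```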

Recall that in the original configuration, measured temperatures were used for the CRAC boundary conditions, and all but two were below 60 °F. Increasing all of the supply temperatures to 65 °F should alleviate the problems suggested by the RCI_LO index and improve the COP, which will save a significant amount of power.

Metric                        | Baseline Case Trial 0 | Baseline Case Trial 1
IT Heat Load (kW)             | 363     | 363
Total IT Heat Load (kW)       | 381.2   | 381.2
CRAC Cooling Power (kW)       | 269.1   | 179.4
CRAC Fan Power (kW)           | 48      | 32
Total Room Heat Load (kW)     | 429.2   | 413.2
Total Cooling Power (kW)      | 317.1   | 211.4
Total Facility Power (kW)     | 746.3   | 624.6
COP                           | 1.59    | 2.30
PUE                           | 1.96    | 1.64
Total Supply Air Flow (CFM)   | 87,000  | 58,000
Total Demand Air Flow (CFM)   | 59,871  | 59,871
RTI (%)                       | 69      | 103
RCI_HI (%)                    | 100     | 100
RCI_LO (%)                    | <0      | <0
Cost of Electricity ($/kWh)   | 0.09    | 0.09
Annual Cost ($)               | 588,300 | 492,400
Savings ($)                   | -       | 95,900

Table 2: Data center metrics comparing Trials 0 and 1 for the baseline case, in which Zones 2 and 4 and Zones 1, 2, and 4 are shut down, respectively
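The Trial 1 column of Table 2 can be rebuilt from quantities already given: the heat rejection power scales to two-thirds of the measured three-zone value, four CRAC fans remain on, and the cost row again assumes 8,760 operating hours per year. A sketch of that reconstruction (variable names are ours):

```python
# Sketch: the Trial 1 column of Table 2, rebuilt from its inputs. Heat rejection
# power is two-thirds of the measured three-zone value (two of three zones
# remain), four CRAC fans stay on, and 8,760 operating hours per year is our
# assumption for the cost row.

HOURS_PER_YEAR, RATE_USD_PER_KWH = 8_760, 0.09

total_it_kw   = 381.2                         # IT load plus 5% support infrastructure
cooling_kw    = 269.1 * 2.0 / 3.0             # heat rejection power -> 179.4 kW
fan_kw        = 4 * 8.0                       # four active CRAC fans -> 32 kW
room_heat_kw  = total_it_kw + fan_kw          # total room heat load -> 413.2 kW
total_cool_kw = cooling_kw + fan_kw           # total cooling power -> 211.4 kW
facility_kw   = room_heat_kw + total_cool_kw  # total facility power -> 624.6 kW

print(f"COP = {room_heat_kw / cooling_kw:.2f}")                  # 2.30
print(f"PUE = {facility_kw / total_it_kw:.2f}")                  # 1.64
print(f"RTI = {59_871 / (4 * 14_500) * 100:.0f}%")               # 103% (58,000 CFM supplied)
print(f"Annual cost = ${facility_kw * HOURS_PER_YEAR * RATE_USD_PER_KWH:,.0f}")   # ~$492,000
```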

To properly assess such a proposed change, a CFD analysis is needed to determine whether hot spots will form, impacting performance at the upper end of the recommended range. Contours of the rack inlet temperatures for Trial 0 of this scenario, with Zones 2 and 4 disabled, are shown in Figure 7. The minimum and maximum values for the contours are shown in the key on the left. Because the range (65 °F to 78 °F) falls within the ASHRAE recommended range (64.4 °F to 80.6 °F), all racks satisfy the condition, and the RCI_HI and RCI_LO values are both 100%.

Figure 7: Rack inlet temperatures corresponding to 65 °F CRAC supply temperatures for Trial 0, where Zones 2 and 4 are disabled

The average supply temperature for the baseline case with only two zones disabled is 57 °F. Increasing the average supply temperature to 65 °F (an 8 °F increase) corresponds to a 15% increase in the COP, so the new value for this configuration is 1.84. The previous analysis showed, however, that disabling an additional zone results in potential savings of about $95,000 a year. Thus, a CRAC failure analysis should be done with the 65 °F supply temperature boundary condition to make sure that the rack inlet temperatures aren't too high if one of the zones is disabled.

In Figure 8, the rack inlet temperatures are shown for the trial where the maximum rack inlet temperature is highest. It is again Trial 1, in which Zones 1, 2, and 4 are disabled. Based on the maximum value shown in the figure, some of the racks have temperatures above the ASHRAE recommended maximum of 80.6 °F. A calculation of RCI_HI supports this finding, with a value of 97.3%. RCI values between 95% and 100% are considered good for a data center. The value suggests that the average deviation in temperature above the recommended value is small, and this is indeed borne out by the detailed results: all racks have inlet temperatures that are well below the ASHRAE allowable maximum value (90 °F). As expected, RCI_LO has a value of 100%. With 60 °F as the average supply temperature for Trial 1 in the baseline case, the increase in supply temperature for this case (5 °F) corresponds to an increase in the COP to 2.53.

Increasing the supply temperatures to 68 °F results in RCI_HI and RCI_LO indices of 100% for Trial 0. Furthermore, the COP increases to 1.94. For Trial 1, RCI_LO remains at 100%, but RCI_HI drops to 84%. Even so, none of the rack inlet temperatures goes above the ASHRAE allowable value. The COP increases to 2.66 for this scenario.

Figure 8: Rack inlet temperatures corresponding to 65 °F CRAC supply temperatures for Trial 1, where Zones 1, 2, and 4 are disabled

The total facility power can be computed for each of these cases, and from it, the annual cost of running the data center. A summary of COP values and associated costs for the various trials discussed in this section is presented in Table 3. Comparison of the Trial 0 results shows that between $28,500 and $37,000 can be saved by increasing the supply temperatures. Comparison of the Trial 1 results shows that, when one additional zone is disabled, a further $12,500 to $19,000 can be saved by increasing the supply temperatures.

Trial 0                    | Baseline | Supply 65 °F | Supply 68 °F
Average T_SUPPLY (°F)      | 57       | 65           | 68
COP                        | 1.59     | 1.84         | 1.94
Total Facility Power (kW)  | 746.3    | 710.0        | 698.8
Annual Cost ($)            | 588,300  | 559,800      | 551,000
Savings ($)                | -        | 28,500       | 37,300

Trial 1                    | Baseline | Supply 65 °F | Supply 68 °F
Average T_SUPPLY (°F)      | 60       | 65           | 68
COP                        | 2.30     | 2.53         | 2.66
Total Facility Power (kW)  | 624.6    | 608.7        | 600.4
Annual Cost ($)            | 492,400  | 479,900      | 473,400
Savings ($)                | -        | 12,500       | 19,000

Table 3: A comparison of COP and predicted annual costs resulting from increased CRAC supply temperatures; savings of at least $28,000 can be achieved if three of the five zones are operational (Trial 0, top), and at least $12,000 more if one additional zone is disabled (Trial 1, bottom)
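The paper does not show the algebra behind the Table 3 power figures, but they can be reconstructed, to within about 1 kW, by assuming the total facility power is the room heat load plus the heat rejection power (room heat load divided by COP) plus the CRAC fan power, with costs again based on 8,760 operating hours per year. A sketch of that reconstruction:

```python
# Sketch: reconstruct the Table 3 rows by assuming
#   facility power = room heat load + (room heat load / COP) + CRAC fan power
# and 8,760 operating hours per year. The paper does not show this algebra,
# but the results agree with Table 3 to within about 1 kW.

HOURS_PER_YEAR, RATE_USD_PER_KWH = 8_760, 0.09

def facility_kw(room_heat_kw: float, cop: float, fan_kw: float) -> float:
    return room_heat_kw + room_heat_kw / cop + fan_kw

def annual_cost_usd(kw: float) -> float:
    return kw * HOURS_PER_YEAR * RATE_USD_PER_KWH

# Trial 0 (three zones active): room heat load 429.2 kW, fan power 48 kW
for cop in (1.59, 1.84, 1.94):
    kw = facility_kw(429.2, cop, 48.0)
    print(f"Trial 0, COP {cop:.2f}: {kw:.1f} kW, ${annual_cost_usd(kw):,.0f}/yr")

# Trial 1 (two zones active): room heat load 413.2 kW, fan power 32 kW
for cop in (2.30, 2.53, 2.66):
    kw = facility_kw(413.2, cop, 32.0)
    print(f"Trial 1, COP {cop:.2f}: {kw:.1f} kW, ${annual_cost_usd(kw):,.0f}/yr")
```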

Applying the savings computed in Tables 2 and 3, the annual cost of the data center could be cut by about $108,000 by disabling one of the zones and increasing the supply temperature to 65 °F.

Summary

Computational fluid dynamics and data center metrics have been used to study a data center for which a number of measurements were available. The ten CRACs in the room are controlled using five zones, with two CRACs in each zone. Because the heat load is less than the originally planned value, the data center currently operates with only three of the five zones active. Even so, the normal operating configuration generates temperatures that are colder than needed. CFD was used to test alternative scenarios with additional zones disabled and with increased supply temperatures. For each of the design modifications, energy calculations were performed to estimate the total facility power usage and corresponding cost. The results of the studies show that one additional zone can be disabled and the supply temperatures can be raised slightly. With these changes, the rack inlet temperatures will remain well within the ASHRAE allowable temperature range, and the annual cost of running the facility will be reduced by about $100,000.

© 2011 Applied Math Modeling Inc.