Benefits of Cold Aisle Containment During Cooling Failure


Introduction

Data centers are mission-critical facilities that require constant operation because they are at the core of the customer-business relationship for industries such as finance, healthcare, and telecommunications. A major challenge in keeping this operation uninterrupted arises from unplanned failures and power outages. According to a national survey on data center outages conducted by the Ponemon Institute in 2010, 95% of the 453 surveyed data center operators experienced an unplanned outage over the preceding two-year period. The average number of complete data center shutdowns was 2.48, with the average downtime event lasting 107 minutes per outage. The survey results also showed that CRAC (Computer Room Air Conditioner) failure is one of the top root causes of data center outages.

The majority of data centers are designed with the IT equipment on Uninterruptible Power Supply (UPS)-backed power and with elements of the supporting infrastructure, such as the cooling systems, on generator-backed power. In the event of a power failure, the IT equipment on the UPS keeps running and keeps dissipating heat into the room. Generators typically come online within 10-20 seconds, allowing the cooling equipment to come back online quickly and keep the IT equipment in a safe operating condition. However, in the event of a cooling equipment failure (e.g., a mechanical breakdown), it may take several minutes to bring the cooling system back online, and temperatures within a data center can rise very quickly and significantly during this period. Therefore, the main issue to address when discussing cooling equipment failure in data centers is how much time is available before the IT equipment starts shutting down due to overheating, commonly referred to as the ride-through time.

This concern becomes more relevant when the subject data center has a Cold Aisle Containment (CAC) system. Containment has been a growing trend in the data center industry and is an important energy saving strategy for data center optimization. Cold aisle containment provides a physical separation between the cold air and the hot exhaust air by enclosing the cold aisle, preventing hot air recirculation and cold air bypass. A common perception is that during a cooling system failure a CAC system will provide a short ride-through time because the IT equipment can pull cool air only from a small volume (the contained cold aisle), while a standard HA/CA (Hot Aisle/Cold Aisle) configuration would provide a longer ride-through time because the IT equipment can pull cool air from a larger volume (the room). However, the test data from an experimental study conducted by Data Center 2020 showed a CAC configuration providing a longer ride-through time than a standard HA/CA configuration.

The Data Center 2020 article explained that the lack of containment allowed server recirculation to occur very quickly, resulting in a shorter ride-through time for the HA/CA configuration. The research team at Panduit found this brief explanation of the CAC configuration's behavior insufficient and performed a series of experiments to study the effect of cold aisle containment on data center temperatures during a cooling failure. This document summarizes the findings of the study, which dispute the perception that a CAC system always shortens the IT equipment ride-through time. On the contrary, the test data shows that a CAC system improves the IT equipment ride-through time. In addition, this document compares the results from a thermodynamics-based analytical tool and a transient CFD model of the lab space to the test data, highlighting the summary results from the analytical tool and the CFD simulation.

Thermal Test Lab Setup

The tests were conducted at the Panduit Thermal Lab in the Greater Chicago area. The lab space is dedicated to testing the performance of various data center products offered by Panduit, such as cabinets, CAC, chimney cabinets, blanking panels, and power outlet units. The lab is cooled by a 20-ton Liebert Computer Room Air Conditioning (CRAC) unit equipped with two digital scroll compressors and Variable Frequency Drives (VFDs) on both the evaporator and the condenser fans. The CRAC unit is connected to an air-cooled condensing unit on the roof, has a nominal airflow rate of 9,000 cfm, and is equipped with a built-in control module.

The test bed consists of eight cabinets arranged in two rows of four, enclosing a common 6-foot-wide cold aisle. Each row contains one of Panduit's 24-inch-wide cabinets and three of Panduit's 28-inch-wide cabinets. Panduit's Net-Contain cold aisle containment is installed on these cabinets. The CRAC unit provides cold air to the contained cold aisle through twelve 25%-open tiles, each 2 ft x 2 ft. Figure 1 illustrates the physical setup of the CAC system used for testing.

Figure 1. Cold Aisle Containment system test setup.

Load Bank Details

A total of ten 9 RU rack-mountable load banks were used to simulate IT heat load and airflow in the eight cabinets. Figure 2 shows the front view of a load bank. Each load bank consists of two channels (halves) that can be operated independently to generate heat load and airflow. Each channel can generate 4.17 kW of heat load with an estimated airflow of 475 cfm.

Figure 2. Front panel of the 9 RU load bank used for testing.
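As a quick plausibility check on these ratings, the exhaust-air temperature rise across one load-bank channel can be estimated from the stated heat load and airflow. The sketch below assumes standard air properties; the result is illustrative rather than measured.

```python
# Quick plausibility check of the load-bank ratings quoted above
# (4.17 kW and roughly 475 cfm per channel). Air density and specific
# heat are assumed standard values; the result is illustrative only.

CFM_PER_M3S = 2118.88   # 1 m^3/s expressed in cfm
RHO_AIR = 1.2           # kg/m^3 (assumed)
CP_AIR = 1005.0         # J/(kg*K) (assumed)

def exhaust_rise_f(heat_w: float, airflow_cfm: float) -> float:
    """Air temperature rise across a heat source, dT = Q / (rho * cp * V), in F."""
    vol_flow_m3s = airflow_cfm / CFM_PER_M3S
    dt_k = heat_w / (RHO_AIR * CP_AIR * vol_flow_m3s)
    return dt_k * 9.0 / 5.0

print(f"Per-channel exhaust rise: ~{exhaust_rise_f(4170.0, 475.0):.0f} F")
# ~28 F, in the range of a typical server delta-T
```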

[Figure 3 is a plan view of the test bed showing the containment door, the perforated tiles, and the load bank channels in each cabinet, labeled A, B, C, O, and X.]

X positions were not used for any tests.
A, B, C, and O units were used for the 62.6 kW tests (15 units).
B, C, and O units were used for the 45.9 kW tests (11 units).
Only O units were used for the 16.7 kW tests (4 units).

Figure 3. Load bank arrangement within each cabinet.

The load bank channels were systematically opened or closed to simulate the desired heat load conditions, as described in Figure 3, which also shows the load bank distribution among the cabinets. The inlets of the unused channels were blanked off with cardboard and duct tape to prevent airflow leakage through them, which could be substantial in a pressurized air containment environment.

Sensors

The lab was equipped with various sensors to characterize the thermal and airflow patterns within the room space. Tridium JACE-700 hardware and AX Supervisor software were used to collect, store, and process data from the room sensors (e.g., power meters, thermocouples, pressure sensors) and from the CRAC unit. Temperature sensors were located at the middle and top of each cabinet. A temperature sensor was also placed at the CRAC unit return, while a supply air temperature sensor was placed in the under-floor plenum, 5 feet away from the CRAC unit. Additional temperature sensors were installed in the lab to monitor the average room temperature. Pressure sensors were installed to monitor the pressure drop across the CRAC unit, the under-floor plenum pressure, and the pressure at the bottom and top of the cold aisle containment.

[Figure 4 gives two schematics of the test lab: (a) a top view marking the temperature sensor locations and (b) a left-side view marking the pressure sensors (CAC top, CAC bottom, plenum, and CRAC delta P).]

Figure 4. (a) Location of temperature sensors (top view). (b) Location of pressure sensors (left-side view).

Figure 4 shows the locations of these temperature and pressure sensors. To monitor the IT equipment load, all the load banks were connected to Panduit's 208 V three-phase metered Power Outlet Units (capable of monitoring current in amps), which were in turn connected to the JACE (Java Application Control Engine) for data collection and storage. The JACE unit collected the fan speed, compressor utilization, and return/supply air temperatures directly from the CRAC unit.

Testing Details

A total of ten tests were conducted: one with open HA/CA (without a CAC) as a baseline scenario and nine with CAC, covering three heat load settings on the load banks (62.6 kW, 45.9 kW, and 16.7 kW) at three return air set point temperatures (75°F, 80°F, and 95°F). The test procedure was identical for all ten test runs. Table 1 shows the heat load and airflow distribution among the eight cabinets for the 45.9 kW scenario.

Table 1. Heat load and airflow per cabinet for the 45.9 kW heat load scenario.

Cabinet #   Estimated Heat Load (kW)   Airflow (cfm)
1           4.17                       475
2           4.17                       475
3           8.34                       950
4           8.34                       950
5           4.17                       475
6           8.34                       950
7           4.17                       475
8           4.17                       475
Total       45.9                       5225
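The per-cabinet values in Table 1 follow directly from the per-channel load-bank ratings. The sketch below rebuilds the totals from the channel counts implied by the table; the counts themselves are inferred for illustration.

```python
# Rebuilding the Table 1 totals from the per-channel load-bank ratings
# (4.17 kW and 475 cfm per active channel). The channel counts per cabinet
# are inferred from the table itself and are shown for illustration only.

KW_PER_CHANNEL = 4.17
CFM_PER_CHANNEL = 475

active_channels = {1: 1, 2: 1, 3: 2, 4: 2, 5: 1, 6: 2, 7: 1, 8: 1}

total_kw = sum(n * KW_PER_CHANNEL for n in active_channels.values())
total_cfm = sum(n * CFM_PER_CHANNEL for n in active_channels.values())

for cab, n in active_channels.items():
    print(f"Cabinet {cab}: {n * KW_PER_CHANNEL:5.2f} kW  {n * CFM_PER_CHANNEL:4d} cfm")
print(f"Total:   {total_kw:.1f} kW  {total_cfm} cfm (~{total_cfm / total_kw:.0f} cfm per kW)")
# 45.9 kW and 5225 cfm, matching Table 1
```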

Test Procedure

The testing for all the scenarios started with running the CRAC unit and the load banks at a specified setting until the room temperature reached a steady state. Once steady state was reached, the CRAC unit was turned OFF while the load banks were kept running. This simulates a data center power outage in which the CRAC unit fails while the IT equipment keeps running on UPS-backed power. With the CRAC unit turned OFF, the average room temperature was monitored until it reached close to 105°F, after which the CRAC unit was turned back ON to cool the room down to its initial set point condition. The threshold limit of 105°F for the average room temperature was selected for two reasons:

1. To ensure that the return air temperature did not exceed the CRAC unit's limit;
2. To ensure that the average cabinet inlet air temperature did not greatly exceed 95°F, around which most IT equipment would run the risk of raising minor alarms.

The tile airflow readings were taken during the initial steady state using a flow hood, and a separate set of tile airflow readings was made during the CRAC unit failure. These readings were done separately to avoid changing the room settings during the transient runs. The flow hood readings were later compared to the tile airflow values calculated from the tile delta P readings and the tile specification data provided by the manufacturer; the total tile airflow from the two methods differed by only 80 cfm.
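The manufacturer's tile curve used for that cross-check is not reproduced in this document. The sketch below shows only the general form of such a calculation, using a generic square-root (orifice-style) relationship in which the discharge coefficient, tile open area, and example delta P are all assumed values.

```python
import math

# Generic square-root (orifice-style) form of a perforated-tile flow
# calculation. The discharge coefficient, the tile open area, and the
# example delta-P below are assumptions made for illustration; the study
# itself used the manufacturer's published tile specification.

RHO_AIR = 1.2            # kg/m^3 (assumed)
PA_PER_INH2O = 249.1     # pascals per inch of water
CFM_PER_M3S = 2118.88

def tile_airflow_cfm(dp_inh2o: float, open_area_m2: float, cd: float = 0.65) -> float:
    """Flow through the tile open area: V = Cd * A * sqrt(2 * dP / rho)."""
    dp_pa = dp_inh2o * PA_PER_INH2O
    return cd * open_area_m2 * math.sqrt(2.0 * dp_pa / RHO_AIR) * CFM_PER_M3S

# A 2 ft x 2 ft tile that is 25% open has roughly 0.09 m^2 of open area (assumed).
per_tile = tile_airflow_cfm(dp_inh2o=0.05, open_area_m2=0.09)
print(f"~{per_tile:.0f} cfm per tile, ~{12 * per_tile:.0f} cfm for the twelve tiles")
```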

Test Data and Analysis

This section summarizes the relevant transient data from the test runs and describes the findings using the 45.9 kW heat load scenario with an 80°F CRAC return air set point temperature. The ride-through time is the duration for which the IT equipment inlet air temperature stayed below the threshold limit of 95°F after the CRAC unit was turned OFF. A baseline scenario with open HA/CA (without CAC) was run for the 45.9 kW heat load.

[Figure 5 plots the CRAC fan speed and compressor utilization (%) against time (min), with the failure interval marked.]

Figure 5. Fan speed and compressor utilization for the 45.9 kW heat load and 80°F return temperature test case (without CAC).

Figure 5 shows the CRAC fan speed and compressor utilization vs. time for the baseline scenario with open HA/CA (without CAC). The CRAC unit was turned ON and OFF manually; however, the fan speed and compressor utilization data were collected directly from the built-in control module of the CRAC unit. Once the CRAC unit reached the initial steady state, the unit was turned OFF for the time interval labeled Failure Time in the figure. The CRAC unit was restarted once the average room temperature reached close to the critical limit of 105°F. Upon the restart of the CRAC unit, the compressor utilization ran in steps of 50%, demonstrating that one compressor reached its maximum before the second compressor kicked in.

Figure 6 shows the temperature data for the baseline open HA/CA scenario. During normal operation, the CRAC unit maintained an 80°F return air set point temperature by regulating its fan speed and its compressor speed. The maximum cabinet inlet temperature was close to 65°F, well below the critical limit of 95°F. However, after the CRAC unit failed (was turned OFF), both the cabinet inlet air temperature and the average room temperature rose quickly to unacceptable levels. It took only four minutes for the maximum cabinet inlet air temperature to reach 95°F (the ride-through time) and nine minutes for the room average temperature to reach near 105°F.

Figure 6. Temperature response for the 45.9 kW heat load and 80°F return temperature test case (without CAC).

The same test scenario was run with the addition of the CAC system to determine its effect on the ride-through time. Figure 7 shows the temperature data vs. time. During normal operation, the CRAC unit supplied cool air at 55°F to the plenum space, with an average cabinet inlet air temperature of 61°F. The 6°F difference between the supply air and the cabinet inlet air temperature suggests some heat gain by the cold air on its path to the cabinet inlet. The room average temperature reading was similar to the recorded return air temperature. After a CRAC unit failure, the supply air temperature and the cabinet inlet air temperature would be expected to rise very quickly and eventually reach the room ambient temperature. However, the test results showed that even though the supply air temperature rose, it did not reach the return air temperature within the ride-through time.

It took approximately 19 minutes for the maximum cabinet inlet air temperature and the room average temperature to reach the critical limits of 95°F and 105°F, respectively.

Figure 7. Temperature response for the 45.9 kW heat load and 80°F return temperature test case (with CAC).

The longer ride-through time for the test run with CAC can be explained by two phenomena that are often ignored when discussing the transient case of a cooling equipment failure. First, in the CAC system the IT equipment can pull cool air from the plenum space and through the cooling unit. The plenum space acts as a cold air reservoir from which the IT equipment can draw during a cooling failure. Second, the CRAC unit has an inherent cooling storage capability due to the thermal mass of its various components (e.g., heat-exchanger coils, working coolant fluid, blowers, and compressors). The stored thermal mass of the cooling unit absorbs heat from the circulating air and keeps the cabinet inlet air temperature within acceptable limits. In the baseline open HA/CA scenario, by contrast, the load banks were not able to take advantage of the pool of cold air available in the plenum space or the cold thermal mass stored in the CRAC unit. Without a CAC, the load banks drew their airflow from the room, the path of least resistance, and recirculated the warm room air. The tile airflow measurement data also confirmed that no airflow came out of the tiles when the CRAC was not running.
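These two phenomena amount to simple first-law bookkeeping: the heat dissipated during the outage must be absorbed by whatever thermal capacitance the circulating air can actually reach. The sketch below illustrates that bookkeeping with assumed round-number capacitances; the values are not measurements from the test lab.

```python
# First-law view of the two phenomena above: the heat dissipated during the
# outage, Q_IT * t, must be absorbed by whatever thermal capacitance the
# circulating air can reach, so t = C_eff * dT_allowed / Q_IT. The capacitance
# values are assumed round numbers for illustration, not lab measurements.

def ride_through_min(c_eff_kj_per_k: float, dt_allowed_k: float, q_it_kw: float) -> float:
    """Minutes until the allowed temperature rise is used up."""
    return c_eff_kj_per_k * dt_allowed_k / q_it_kw / 60.0

Q_IT_KW = 45.9
DT_ALLOWED_K = 17.0   # roughly a 30 F rise from a ~65 F inlet to the 95 F limit

scenarios = [
    ("room air only (no CAC, recirculation)", 300.0),       # kJ/K, assumed
    ("room + plenum air + CRAC internals (CAC)", 1200.0),    # kJ/K, assumed
]
for label, c_eff in scenarios:
    print(f"{label}: ~{ride_through_min(c_eff, DT_ALLOWED_K, Q_IT_KW):.0f} min")
```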

The pressure data for the CAC test run confirmed the airflow through the CRAC unit during the cooling failure. Figure 8 shows the data from the pressure sensors for the scenario with CAC. With the CRAC unit running, a zero pressure in the CAC (the CAC top pressure) indicates that the CRAC unit supplies exactly the airflow needed in the CAC to satisfy the load banks' airflow demand. The pressure difference across the tiles indicates airflow coming from the plenum space into the containment system. During the CRAC failure, the CAC pressure went negative, indicating that the airflow supplied to the CAC was insufficient to satisfy the load banks' demand. A positive tile delta P indicates flow from the plenum into the containment even during the CRAC failure. Also, a negative delta P across the CRAC unit (the plenum pressure more negative than the room pressure) suggests that room air recirculates into the plenum space through the CRAC unit. The flow hood tile airflow readings later confirmed this observation. Once the CRAC unit failed, the fans on the load banks pulled air from the plenum space and through the unit. This suggests that a well-sealed cold aisle containment system enables the IT equipment to pull its airflow through the perforated tiles, not from the room.

[Figure 8 plots the tile delta P, CRAC delta P, and CAC top pressure (in. H2O) against time (min), with the failure interval marked.]

Figure 8. Pressure readings for the 45.9 kW heat load and 80°F return temperature test case (with CAC).
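The sign conventions applied in the preceding paragraph can be summarized as a small diagnostic helper; the sensor names follow Figure 8, while the threshold and the sample readings are illustrative assumptions rather than recorded data.

```python
# The sign conventions used in the discussion above, written out as a small
# diagnostic helper. Sensor names follow Figure 8; the threshold and the
# sample readings are illustrative values, not data from the test.

def interpret(cac_top: float, tile_dp: float, crac_dp: float) -> list:
    """Classify a set of pressure readings (in. H2O) during a containment test."""
    notes = []
    if abs(cac_top) < 0.005:
        notes.append("CAC pressure ~0: tile airflow matches the load banks' demand")
    elif cac_top < 0:
        notes.append("CAC pressure negative: tile airflow falls short of demand")
    else:
        notes.append("CAC pressure positive: tile airflow exceeds demand")
    if tile_dp > 0:
        notes.append("tile delta P positive: air still flows from the plenum into the aisle")
    if crac_dp < 0:
        notes.append("CRAC delta P negative: room air is drawn back into the plenum through the CRAC")
    return notes

# Example readings of the kind seen during the failure interval (illustrative).
for note in interpret(cac_top=-0.02, tile_dp=0.01, crac_dp=-0.03):
    print(note)
```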

The remaining eight scenarios with the CAC system were performed to study the effect of IT load and air set point temperature conditions on the room transient behavior during a cooling equipment failure. Figure 9 shows the sum of the airflow through the tiles with and without the CRAC unit running, for the different IT heat load conditions at an 80°F return air temperature set point. During the CRAC failure, the load banks were able to pull a significant amount of airflow through the tiles (24% to 55% of the total tile airflow measured with the CRAC unit operating). As shown in the chart, the total tile airflow increased with the heat load. During normal operation, the CRAC unit pushed more airflow to support the higher IT heat loads; when the CRAC unit was not running, the higher IT heat loads pulled more airflow through the tiles and the CRAC unit. The amount of airflow through the tiles is expected to vary with the IT equipment type (e.g., blade servers with strong fans would pull more airflow through the tiles than 1 RU or 2 RU servers with weaker fans), which in turn would affect the ride-through time. The tile percentage opening and any other airflow resistances are also expected to affect the volume of air the IT equipment can pull from the plenum and the cooling unit.

[Figure 9 is a bar chart of total tile airflow (cfm) for the 62.6 kW, 45.9 kW, and 16.7 kW heat loads, with the CRAC ON at an 80°F return set point and with the CRAC OFF.]

Figure 9. Summary of tile airflow for different heat loads at 80°F return air temperature (with CAC).

Figure 10 shows the ride-through time (the time for the maximum cabinet inlet air temperature to reach 95°F) for all nine scenarios with the CAC system. The ride-through time becomes shorter for higher IT heat loads and for higher return air temperature set points. Therefore, the initial room air temperature and the IT equipment heat load strongly dictate the ride-through time of the data center during a cooling equipment failure.

[Figure 10 is a bar chart of the time for the maximum cabinet inlet air temperature to reach 95°F (min) for the 62.6 kW, 45.9 kW, and 16.7 kW heat loads at return temperature set points of 75°F, 80°F, and 95°F.]

Figure 10. Summary of ride-through time for all the test cases.

Analytical Modeling

Panduit developed an analytical tool based on thermodynamic principles and conservation laws to predict the transient behavior of a data center. The tool can analyze the transient response for different types of cooling equipment failure, such as a chiller failure, a pump failure, or a CRAC fan failure. Figure 11 compares the room average temperature predicted by the tool to the measured data for the 45.9 kW heat load case with an 80°F return temperature.
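The internals of Panduit's tool are not described in this document. The sketch below shows only the general shape of such a lumped energy-balance model, with one node for the circulating air and one for the cold thermal mass of the CRAC and plenum; every coefficient is an assumed round number for illustration, not a parameter of the actual tool.

```python
# Minimal two-node lumped energy-balance sketch of the kind of transient
# model the analytical tool represents: one node for the circulating air,
# one for the cold thermal mass (CRAC coils/refrigerant, plenum) that the
# air keeps passing through after the failure. Every coefficient below is
# an assumed round number for illustration, not a parameter of Panduit's tool.

F_PER_K = 9.0 / 5.0

def simulate(q_it_w=45.9e3,      # IT heat load dumped into the air, W
             c_air=4.0e5,        # J/K, air in the room, aisle, and plenum (assumed)
             c_mass=3.5e6,       # J/K, CRAC/plenum solid thermal mass (assumed)
             ua=5.0e3,           # W/K, air-to-mass heat transfer coupling (assumed)
             t_air0=80.0,        # F, initial room/return air temperature
             t_mass0=68.0,       # F, initial effective thermal-mass temperature (assumed)
             dt=1.0, minutes=25):
    t_air, t_mass, trace = t_air0, t_mass0, []
    for step in range(int(minutes * 60 / dt)):
        q_to_mass = ua * (t_air - t_mass) / F_PER_K            # W absorbed by the cold mass
        t_air += (q_it_w - q_to_mass) * dt / c_air * F_PER_K   # energy balance on the air
        t_mass += q_to_mass * dt / c_mass * F_PER_K            # energy balance on the mass
        trace.append((step * dt / 60.0, t_air))
    return trace

for minute, temp in simulate()[::300]:   # sample every 5 minutes
    print(f"t = {minute:4.1f} min   air ~ {temp:5.1f} F")
```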

Computational Fluid Dynamics (CFD) Modeling

In addition to the analytical tool, Panduit created a detailed CFD model of the lab space to capture the room transient behavior and compare the results with the test data. Typically, CFD simulations are used to study the steady-state thermal behavior of a data center. Modeling the dynamic environment of a data center requires accounting for the individual thermal masses of the various objects. Therefore, objects such as cabinets, load banks, perforated tiles, solid floor tiles, the containment structure, and the CRAC unit are all modeled with their true weights and treated as thermal masses in the CFD model. Figure 11 shows the room average temperature vs. time for all three methods for the 45.9 kW heat load with an 80°F return air set point condition. Note that time 0 in the plot represents the start of the cooling failure. The results from the CFD simulation and from the analytical tool match the test data within 2°F over the entire transient.

[Figure 11 plots the room average temperature (°F) against time (min) for the test data, the transient (analytical) tool, and the CFD model.]

Figure 11. Comparison between the test data and the Panduit Transient Tool for the 45.9 kW heat load case with an 80°F return temperature (with CAC).

Conclusion

A well-sealed cold aisle containment system not only offers a better thermal environment for the IT equipment but can also provide a longer ride-through time in the event of a cooling failure. For the scenario tested, the cold aisle containment system offered almost five times the ride-through time of the uncontained configuration. The IT equipment's ability to pull air through the cooling unit is expected to improve with proper sealing of the containment system. Although not shown here, the analytical and CFD data for other types of air containment systems (e.g., chimney cabinets) revealed similar behavior and highlighted the advantage of containment, even during a cooling failure.

References

Ponemon Institute, 2010, National Survey on Data Center Outages.
Intel Corporation, 2011, DataCenter 2020: Hot Aisle and Cold Aisle Containment Efficiencies Reveal No Significant Differences, White Paper.