11 Top Tips for Energy-Efficient Data Center Design and Operation




A High-Level How-To Guide
Mechanical Analysis White Paper
www.mentor.com

When is Data Center Thermal Design Important?

The usage of data centers globally has skyrocketed in the past three to four years in response to growing demand for information storage and transfer from banks, hospitals, government bodies, telecom operators and hosting facilities. Data center power load (and therefore heat dissipation) footprints continue to rise, with greenhouse gas emissions from data centers expected to overtake airline industry emissions by 2020 [Ref. 1]. Data centers already consume roughly 2% of total US electricity, growing at around 12% CAGR [Ref. 2]. Cooling constitutes a major cost in the operation of a data center. This has led to an increased focus on minimizing energy use in data centers, and thereby reducing operating costs. A consistent design goal is to keep the Power Usage Effectiveness (PUE) as low as possible, that is, close to unity (1.0).

When is Data Center Thermal Design Essential?

Power densities of server racks continue to increase, from around 1-3 kW per rack just a few years ago to 24-30 kW per rack currently. Thermal issues are a cause of problems, in that the servers and switches must be kept cool. In particular, the failure rate of electronic products rises sharply with elevated temperature, and high thermal stress on solder joints caused by large temperature variations is another source of failure. With the increasing emphasis on the reliability rates expected of data centers, this is an obvious concern, and it has led to an increased focus on monitoring and reporting in data centers to ensure that equipment does not fail.

Data Center Design

Monitoring and reporting are important in an operational data center. For new builds at today's server power levels, and in instances where the same space is retrofitted or old servers are swapped out for new ones in existing racks, there is an increased likelihood of thermal problems due to ever higher power densities. This makes thermal design, proven by CFD simulation, essential if the design capacity of the data center is to be reached.

1: Start When the Data Hall is Designed

The best time to consider the thermal design of the data center is pre-build (for a greenfield construction). This means that the following can be considered at the outset:

- The overall HVAC system: for example, a raised floor design, in-line cooling approaches, common or individual rack chimneys, dedicated cooling units for racks, ceiling units, etc.
- Once the HVAC system has been determined, the wiring and plumbing arrangements
- Methodologies for hot swapping of Computer Room Air Conditioning (CRAC) and other cooling units, individual servers and/or racks, redundancy, etc.

Note that retrofitting and troubleshooting can and should also be addressed using CFD models, as some changes, for example switching from a raised floor to in-line cooling, can be very expensive. The use of CFD can ensure that wise choices are made for legacy data centers, confirmed by simulation prior to physically making a change. As a result of following this advice, Mentor Graphics Corporation's own regional data centers in Wilsonville and Shannon now operate with a PUE of around 1.15 to 1.2. There is no real data available on the worldwide spread of PUE values, but many data centers have a PUE of 3.0 or greater, while a value of 1.6 should be achievable [Ref. 1]. Measurements completed by Lawrence Berkeley National Labs show that the 22 data centers measured had PUE values in the 1.3 to 3.0 range [Ref. 3].
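
To make the PUE target concrete, here is a minimal sketch of the calculation using the definition given in the Appendix; the energy figures are illustrative values, not measurements from any particular site.

```python
# Minimal PUE calculation, following the definition in the Appendix:
# PUE = Total Facility Energy / IT Equipment Energy.
# The energy figures below are illustrative, not measured data.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness; 1.0 is the theoretical ideal."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Example: 1,200 MWh consumed by the whole facility over a year,
# of which 1,000 MWh was drawn by the IT equipment itself.
print(pue(1_200_000, 1_000_000))  # -> 1.2, i.e. 20% overhead for cooling etc.
```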

2: Build A Simple Model of the Data Hall

Include key features such as the overall structure, raised floor (if present), air handling/cooling units, racks/servers, large floor plenum blockages (if present) and floor tiles (if present). Figure 1 shows a model of a small data center supplied with FloTHERM that can be used for tutorial purposes, or as the starting point for a model of another data center.

Figure 1: Model of a Conventional Raised Floor Data Center as Supplied with FloTHERM V10 (labelled components: CRAC unit, racks, floor tiles, floor plenum, cable tray blockage)

3: Import Data from CAD

The design of the data center can be imported from a CAD model, from DXF files, or by extracting the appropriate data from DCIM or similar software using the FloTHERM FloXML schema. The latter is discussed in more detail later. If required, items can be substituted with appropriate FloTHERM SmartPart or library representations.

4: Use Intelligent Models of Data Center Components

FloTHERM provides SmartParts for CRAC units, as well as perforated tiles, coolers, racks and servers. The intelligence built into the SmartPart representations provides greater accuracy. For example, a Rack SmartPart correctly accounts for the stratification of air in the rack and the servers within it. Appropriate grid can be added by defining suitable Grid Constraints to ensure that each component is always represented with enough mesh for the analysis. The Pattern functionality in FloTHERM is very useful for defining repeating rows of racks and/or floor tiles.
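
As an illustration of the repetition that the Pattern functionality automates, the sketch below generates origin coordinates for repeated rows of racks in an alternating hot aisle/cold aisle layout. The dimensions and the helper function are hypothetical example values, not FloTHERM API calls.

```python
# Illustrative sketch of laying out repeated rack rows, the kind of
# repetition FloTHERM's Pattern functionality automates. All dimensions
# and names here are hypothetical examples, not FloTHERM API calls.

RACK_WIDTH = 0.6   # m, typical 19-inch rack footprint width
RACK_DEPTH = 1.2   # m
COLD_AISLE = 1.8   # m, aisle between facing rack fronts
HOT_AISLE = 1.2    # m, aisle between facing rack backs

def rack_positions(rows: int, racks_per_row: int):
    """Yield (row, index, x, y) origin coordinates for each rack."""
    y = 0.0
    for row in range(rows):
        for i in range(racks_per_row):
            yield row, i, i * RACK_WIDTH, y
        # Alternate cold and hot aisles so fronts face fronts
        # and backs face backs in successive row pairs.
        y += RACK_DEPTH + (COLD_AISLE if row % 2 == 0 else HOT_AISLE)

for row, i, x, y in rack_positions(rows=4, racks_per_row=10):
    print(f"row {row} rack {i}: origin ({x:.1f}, {y:.1f}) m")
```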

Figure 2: Cooler SmartPart Construction Dialog in FloTHERM

In this first version of the model, the relative performance of the data center can be established using a relatively simple representation of each rack. In particular, each rack can be modelled as a single block using the FloTHERM Rack SmartPart, with an appropriate power dissipation and either the flow rate through it or the temperature rise of the airflow between the inlet and outlet of the rack (a simple airflow sizing sketch is given at the end of this tip). Similarly, floor tiles can be modelled using a simple pressure drop vs. velocity representation, using the Perforated Plate SmartPart.

5: Consider Design Alternatives

At this stage the aim is to explore as many design options as possible to select the most suitable design for the data hall. This will depend on many factors, including:

- The power density of the racks in the data center and their distribution. While racks tend to be clustered based on function or power density, cooling requirements can vary significantly within the space, potentially needing supplementary cooling.
- The space envelope that is available for the design. For example, the use of a chimney on the racks may require upper-level real estate which is currently used for wiring, lighting, etc.

Having open aisles of racks is the simplest option, with individual floor tiles or in-line coolers providing cold air to the racks. Typically, this is arranged in hot aisle and/or cold aisle designs, where the outlets or inlets of rows of racks face each other respectively. However, hot air from the room can easily be drawn back into the racks, bypassing the cooling infrastructure. As a result, the cooling will be ineffective, requiring far more cooling than should be necessary to keep the IT equipment sufficiently cool, and hence giving a relatively high PUE. There are a number of strategies that can be followed to improve the thermal design of the data hall, for example:

- Cold aisle containment
- Hot aisle containment
- In-line cooling units
- Rear-door heat exchangers
- Dedicated rack cooling units, for example, liquid cooling
- Overhead HVAC system
- Dedicated rack chimneys/ducting to exhaust
- Evaporative free air cooling, supplemented with CRAC units
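
The airflow sizing sketch referenced above: for a rack specified by its power dissipation and air temperature rise, the required volumetric flow follows from the sensible heat balance. The rack power and temperature rise below are illustrative values.

```python
# Minimal sketch: required airflow through a rack from its power
# dissipation and allowed air temperature rise, using the sensible
# heat balance P = rho * cp * Q * dT. Input values are illustrative.

RHO_AIR = 1.19   # kg/m^3, air density at ~25 C
CP_AIR = 1005.0  # J/(kg K), specific heat of air

def required_airflow_m3s(power_w: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) needed to remove power_w at a
    delta_t_k inlet-to-outlet temperature rise."""
    return power_w / (RHO_AIR * CP_AIR * delta_t_k)

# Example: a 24 kW rack with a 12 K air temperature rise.
q = required_airflow_m3s(24_000, 12.0)
print(f"{q:.2f} m^3/s (~{q * 2118.9:.0f} CFM)")  # -> 1.67 m^3/s, ~3543 CFM
```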

Figure 3: Overhead HVAC System with Dedicated Rack Chimneys

All of these options can be considered quickly and effectively using CFD.

6: Investigate Different Operational Scenarios

CFD can be used to model the change in conditions that occurs as the data center is populated. While data centers are generally populated based on functionality or power density, different strategies may be employed in terms of population ramp-up and intended life cycle. For example, data centers may have significant expansion room to be filled over a defined time period. At low capacity, some cooling strategies result in a much higher PUE than others, so the aim should be a design that gives the lowest total energy consumption over the design life of the data center. Using CFD to ensure that the cooling can be efficiently matched to the data center capacity, and to identify how best to populate the data center, will minimize lifetime operating cost.

By the same token, different operating conditions can be considered as iterations of the baseline model, including, for example, variation in the power loading (and therefore thermal load), flow rates and operation of cooling units when asset utilization is low. CFD can also be used to investigate the maximum power that can be applied to each rack for a given cooling strategy, to help understand how the data center design will cope with future, higher power servers. Finally, CFD can be used to determine the impact of equipment failure, for example, the time-varying (transient) temperature response at important locations due to a CRAC unit failure.

7: Use Capture Index to Judge Design Worthiness

The Capture Index (CI) [Ref. 4] provides an important tool in the design of the data center. The temperature distributions in a data center are not always the most effective means of determining whether a design is good or not and, therefore, may not guide the designer in trying to make improvements. In particular, the temperature distributions are a symptom of the data center design rather than the cause. Two forms of Capture Index are available: Cold Aisle CI and Hot Aisle CI (see Appendix). Both are typically expressed as a percentage: the closer they are to 100%, the closer the cooling system's performance is to the ideal. This data is readily tabulated for quick and easy comparison between alternative designs.
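
A minimal sketch of the metric itself, following the definitions in the Appendix; the airflow fractions are made-up example numbers, since in practice they come from the CFD solution.

```python
# Minimal sketch of the Capture Index calculation, following the
# definitions in the Appendix. The airflow values below are made-up
# examples; in practice they come from the CFD solution.

def cold_aisle_ci(inlet_flow_from_cooling: float, total_inlet_flow: float) -> float:
    """Fraction of rack inlet air originating from local cooling
    resources (perforated tiles, local coolers), as a percentage."""
    return 100.0 * inlet_flow_from_cooling / total_inlet_flow

def hot_aisle_ci(exhaust_flow_captured: float, total_exhaust_flow: float) -> float:
    """Fraction of rack exhaust air captured by local extracts
    (local coolers, return vents), as a percentage."""
    return 100.0 * exhaust_flow_captured / total_exhaust_flow

# Example: a rack ingests 1.6 m^3/s, of which 1.4 m^3/s came from the
# perforated tiles; the remaining 0.2 m^3/s is recirculated hot air.
print(f"Cold Aisle CI = {cold_aisle_ci(1.4, 1.6):.0f}%")  # -> 88%
```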

Figure 4: Examples of Hot Aisle and Cold Aisle Capture Index Calculation, Respectively

8: Minimize the Model Run Time

FloTHERM's unique Localized Grid technology allows the user to obtain rapid simulation turnaround times.

Figure 5: Traditional Data Center Design Showing Localized Mesh Through the Racks

Localized gridding prevents mesh bleeding, that is, unnecessarily small mesh cells extending beyond the geometry out into the surrounding air. The functionality is particularly appropriate for data centers because it cleanly separates the fine grid needed inside racks and coolers from the coarse grid that suffices in the open room. Because bleeding mesh lines propagate in all three directions, and the total cell count is the product of the number of cells in each direction, localized gridding can reduce cell counts dramatically; halving the cell count in each direction, for example, cuts the total by a factor of eight.

The overall airflow and temperature distributions in a data center are relatively unaffected by typical perturbations to variables such as the heat dissipation of racks and CRAC/cooler operating conditions. The data center is therefore a particularly good application in which to use the results of an existing baseline model as the starting point for the next analysis, as this reduces the number of iterations needed to investigate different operational scenarios, and can also help when layout changes are made. Finally, the speed-up in the parallel solver in FloTHERM V10 seems to be particularly well suited to data centers, with speed-ups in some cases above a factor of 10 compared to the previous version.

9: Use Command Center to Optimize Data Center Design

The Design of Experiments and Optimization capabilities included in the Command Center, which forms part of the standard FloTHERM installation, allow the user to optimize the design of the data center. For example, the user could optimize the flow rate through a CRAC unit to achieve a target inlet temperature at a particular rack or array of racks. Figure 6 shows an example of the Command Center dialog box accessing the flow rate variable. Note that other variables, such as the CRAC capacity, are also available.

Figure 6: Example Dialog Box Showing Access to CRAC Unit Variables from the Data Center Example Case
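
To illustrate the kind of search such an optimization performs, here is a minimal sketch that bisects on CRAC flow rate to hit a target rack inlet temperature. The simulate() function is a hypothetical stand-in for a full CFD run, and its crude mixing model is a made-up placeholder, not FloTHERM behaviour.

```python
# Minimal sketch of optimizing CRAC flow rate against a target rack
# inlet temperature by bisection, the kind of search a DoE/optimization
# tool automates. simulate() is a hypothetical stand-in for a CFD run;
# its crude mixing model is a placeholder, not FloTHERM behaviour.

def simulate(crac_flow_m3s: float) -> float:
    """Placeholder 'CFD run': rack inlet temperature falls as CRAC flow
    rises, approaching the 18 C supply temperature."""
    supply_c, room_c = 18.0, 38.0
    mixing = min(1.0, crac_flow_m3s / 10.0)  # fraction of supply air ingested
    return room_c - (room_c - supply_c) * mixing

def find_flow(target_inlet_c: float, lo: float = 0.5, hi: float = 10.0) -> float:
    """Bisect on flow rate until the simulated rack inlet temperature
    is within 0.05 C of the target."""
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if simulate(mid) > target_inlet_c + 0.05:
            lo = mid   # too hot: need more flow
        elif simulate(mid) < target_inlet_c - 0.05:
            hi = mid   # overcooled: back off
        else:
            return mid
    return 0.5 * (lo + hi)

print(f"Flow for 24 C inlet: {find_flow(24.0):.2f} m^3/s")  # exact answer is 7.0
```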

Data Center Commissioning

10: Create a Spreadsheet-Based Front-End

The FloXML schema included with the FloTHERM software allows a data center model to be created, run and post-processed without the need to open FloTHERM itself. It is therefore well suited to non-expert CFD users, for example, field engineers who may be asked to troubleshoot a data center design. Data can be entered directly into a spreadsheet, or parsed from a third-party source such as a DCIM software tool that contains the information needed for the analysis (rack physical dimensions, power dissipations, etc.). This configurable spreadsheet front-end can then create FloXML definitions that are themselves a ready-to-solve (on the command line) FloTHERM model. The results can then be viewed using the free-to-download FloVIZ software, passed back to a post-processing tool, or captured in the spreadsheet. This process is shown in Figure 7, with an example of the FloXML file itself shown in Figure 8. Examples of such spreadsheets and FloXML files are installed with the FloTHERM software from V10 onwards.

Figure 7: Excel-Based CFD Analysis Process and Result Post-Processing (spreadsheet to XML, to the FloTHERM solver, to results viewed in FloVIZ)
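
A minimal sketch of the spreadsheet-to-model step is shown below: it reads rack rows from a CSV export and writes an XML file for the solver. The element and attribute names are hypothetical placeholders, not the actual FloXML schema, which is documented with the FloTHERM installation.

```python
# Minimal sketch of a spreadsheet-driven model generator: read rack
# definitions from a CSV export and emit an XML file for the solver.
# The element/attribute names below are hypothetical placeholders,
# NOT the real FloXML schema (see the FloTHERM documentation for that).

import csv
import xml.etree.ElementTree as ET

def racks_to_xml(csv_path: str, xml_path: str) -> None:
    root = ET.Element("DataCenterModel")  # hypothetical root element
    with open(csv_path, newline="") as f:
        # Expected spreadsheet columns: name, x_m, y_m, power_w, delta_t_k
        for row in csv.DictReader(f):
            ET.SubElement(root, "Rack", {
                "name": row["name"],
                "x": row["x_m"],
                "y": row["y_m"],
                "power": row["power_w"],
                "airTempRise": row["delta_t_k"],
            })
    ET.ElementTree(root).write(xml_path, encoding="utf-8", xml_declaration=True)

# Example usage with a CSV exported from the front-end spreadsheet.
racks_to_xml("racks.csv", "datacenter.xml")
```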

Data Center Operation

11: Simulate the Impact of Change Orders

During operation, whenever a change order is raised to add new IT assets or move existing ones, its impact on the overall operation of the data center can be checked, before the change is implemented, by making the necessary changes in the spreadsheet created above. In conventional designs with perimeter Computer Room Air Conditioning (CRAC) units, introducing new assets or moving existing assets can have unforeseen consequences for equipment in remote locations due to changes in the overall airflow patterns within the data center. Other cooling strategies, although superior in terms of recirculation, stratification and bypass, are not without challenges in terms of equipment cooling.

Closing Remarks

Conventional perimeter CRAC unit layouts focus on the room-level design, with the racks and the equipment they contain considered as an afterthought. More recent design approaches such as aisle containment, and in particular liquid-cooled racks, focus on the cooling of the individual equipment and bypass the need for full room/cold aisle air cooling. In both cases there is an assumption that the airflow environment within the data hall will accommodate the cooling approach from commissioning through to full utilization. We advocate a holistic approach in which the cooling strategy is proven from the outset, by CFD simulation during data center design, to achieve this business goal, with CFD used through commissioning and operation to ensure that asset deployment allows utilization up to the design capacity against a backdrop of changing business needs and increasing equipment power consumption.

Acknowledgements

Dr John Parry, CEng, MBCS, MIEEE
Andy Manning, PhD

Figure 8: XML Schema Example for the Data Center Case

References

1. The Truth About Data Centre Cooling. http://www.bcs.org/upload/pdf/jeremy-hartley.pdf
2. Koomey, Jonathan (2008) Worldwide Electricity Used in Data Centers. Environmental Research Letters, vol. 3, no. 034008, September 23. http://stacks.iop.org/1748-9326/3/034008
3. PUE: A Comprehensive Examination of the Metric. The Green Grid, White Paper #49, 2012. https://www.thegreengrid.org/en/global/content/white-papers/wp49-pueacomprehensiveexaminationofthemetric
4. VanGilder, J.W. and Shrivastava, S.K. (2007) Capture Index: An Airflow-Based Rack Cooling Performance Metric. ASHRAE Transactions, Vol. 113, Part 1, pp. 126-136.

APPENDIX: Definitions

Definition of Power Usage Effectiveness (PUE)

PUE is defined as:

PUE = Total Facility Energy / IT Equipment Energy

Reference 1 indicates that there is no real data on the worldwide spread of PUE values, but that a value of 1.6 should be achievable (page 13).

Definition of Capture Index (CI)

Two forms of Capture Index are available:

Cold Aisle CI: the fraction of air ingested by the rack that originates from local cooling resources (e.g. perforated floor tiles or local coolers), generally expressed as a percentage. Cold Aisle CI is generally used to assess cooling performance in data centers with raised-floor cooling architectures.

Hot Aisle CI: the fraction of air exhausted by a rack that is captured by local extracts (e.g. local coolers or return vents), generally expressed as a percentage. Hot Aisle CI is generally used to assess cooling performance in architectures where row-based coolers or return vents are employed.

For the latest product information, call us or visit: www.mentor.com

2014 Mentor Graphics Corporation, all rights reserved. This document contains information that is proprietary to Mentor Graphics Corporation and may be duplicated in whole or in part by the original recipient for internal business purposes only, provided that this entire notice appears in all copies. In accepting this document, the recipient agrees to make every reasonable effort to prevent unauthorized use of this information. All trademarks mentioned in this document are the trademarks of their respective owners. MGC 01-14 TECH11800