Data Center Cooling & Air Flow Management
Arnold Murphy, CDCEP, CDCAP
March 3, 2015
Strategic Clean Technology Inc.
Focus on improving cooling and air flow management to achieve energy cost savings and improve the operational environment. Services include:
- Cooling Audit: identify aspects of the data centre that cause low cooling efficiency and high operating costs
- Energy Improvement: provide a business case, estimate energy savings and ROI, and implement appropriate improvements
- Computational Fluid Dynamics (CFD) studies: determine facility life expectancy, equipment layout, and air management strategies
- Training on data centre cooling and air flow best practices
- Energy Rebate Interface: work with the local utility to receive appropriate rebates
Goal: improve cooling efficiency of the data centre cost effectively, with rapid payback.
Where does energy go in the Data Centre?
- For every kW of power going into the data centre, a kW of heat is generated and must be removed (a sizing sketch follows below)
- A typical data centre has roughly 4X the cooling capacity its heat load actually requires
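To make the 1:1 power-to-heat rule concrete, here is a minimal sizing sketch. The 200 kW IT load is a hypothetical figure of mine; the unit conversions (3,412.14 BTU/hr per kW, 12,000 BTU/hr per ton of refrigeration) are standard.

```python
# Minimal sketch: sizing cooling for an IT heat load.
# Assumption (mine): a hypothetical 200 kW IT load; the 1:1 power-to-heat
# rule comes from the slide above.

it_load_kw = 200.0                      # power drawn by IT equipment
heat_load_kw = it_load_kw * 1.0         # ~every kW in becomes a kW of heat

btu_per_hr = heat_load_kw * 3412.14     # 1 kW = 3,412.14 BTU/hr
tons_of_cooling = btu_per_hr / 12000.0  # 1 ton of refrigeration = 12,000 BTU/hr

print(f"Heat to remove: {heat_load_kw:.0f} kW "
      f"({btu_per_hr:,.0f} BTU/hr, {tons_of_cooling:.1f} tons)")
# If installed cooling capacity is 4x this figure, the excess points to
# air-management problems (bypass, recirculation) rather than true demand.
```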
Energy Use Diagram
[Diagram: 1 MW facility energy split of 47% / 43% / 10%]
- In many data centres IT is not the major consumer of power
- Cooling stands out as a large user; the question is how to improve and how to measure improvements
- Metrics are available to show baseline performance and progress
- Measuring energy use is necessary to improve management of energy use
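A common baseline metric here is PUE (Power Usage Effectiveness). The sketch below assumes, consistent with the note that IT is not the major consumer, that IT is the 43% share and cooling the 47% share; swap in your own measured split.

```python
# Minimal sketch: computing PUE from an energy breakdown like the diagram
# above. Assumption (mine): IT = 43%, cooling = 47%, other = 10%.

total_kw = 1000.0                 # 1 MW facility
it_kw = total_kw * 0.43
cooling_kw = total_kw * 0.47
other_kw = total_kw * 0.10

pue = total_kw / it_kw            # PUE = total facility power / IT power
print(f"PUE = {pue:.2f}")         # ~2.33; an ideal facility approaches 1.0

# Re-running this after a cooling improvement gives a simple baseline vs.
# progress comparison, which is the point of "measure to manage" above.
```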
What's Changed?
- Recognition that data centres are major consumers of energy and that a high proportion is wasted
- Realization that air flow management is key to effective and efficient cooling practices
- Best practices and metrics are available that can be applied
- New products to improve air flow
- Systems that can monitor and measure cooling effectiveness and efficiency
- Realization that improving cooling efficiency can:
  - Significantly reduce operating costs
  - Reduce the time needed to manage the data centre
  - Extend the life expectancy of the data centre
But cooling is a complex resource to manage:
- Invisible
- Efficiency dictated by a number of factors:
  - Facility design / attributes
  - Equipment design and layout
  - Heat density distribution
- Seemingly unrelated activities can have major impact:
  - Pulling additional cabling; cable trays
  - Upgrading / adding equipment (more heat generated, more cooling required)
  - Reconfiguring perforated tiles
New Operating Guidelines
ASHRAE has expanded operational parameters. The objective is to reduce cooling costs and operate cooling equipment more efficiently.

2008 Class | 2011 Class | Applications | Information Technology Equipment | Environmental Control
1 | A1 | Data centre | Enterprise servers, storage products | Tightly controlled
2 | A2 | Data centre | Volume servers, storage products, personal computers, workstations | Some control
- | A3 | Data centre | Volume servers, storage products, personal computers, workstations | Some control; use of free cooling techniques when allowable
- | A4 | Data centre | Volume servers, storage products, personal computers, workstations | Some control; near full-time use of free cooling techniques
3 | B | Office, home, transportable environment, etc. | Personal computers, workstations, laptops, and printers | Minimal control
4 | C | Point of sale, industrial, factory | Point-of-sale equipment, ruggedized controllers or computers, or PDAs | No control
ASHRAE Temperature and Humidity Ranges (ASHRAE 2011 Equipment Environmental Specifications)

Recommended (Classes A1 to A4): dry-bulb temperature 18 to 27 °C (64.4 to 80.6 °F); humidity range 5.5 °C DP to 60% RH and 15 °C DP (non-condensing)

Allowable (product operation):
Class | Dry-bulb (°C) | Relative humidity | Max dew point (°C) | Max rate of change (°C/hr)
A1 | 15 to 32 | 20 to 80% RH | 17 | 5/20
A2 | 10 to 35 | 20 to 80% RH | 21 | 5/20
A3 | 5 to 40 | 8 to 85% RH | 24 | 5/20
A4 | 5 to 45 | 8 to 90% RH | 24 | 5/20
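A monitoring system can check sensor readings against these envelopes directly. The sketch below encodes only the allowable dry-bulb, RH, and dew-point limits from the table; the recommended range and rate-of-change limits are omitted for brevity, and the function name and structure are mine.

```python
# Minimal sketch: checking a reading against the ASHRAE 2011 allowable
# envelopes tabulated above.

ALLOWABLE = {
    # class: (min_db, max_db, min_rh, max_rh, max_dew_point)  [°C, %RH, °C]
    "A1": (15, 32, 20, 80, 17),
    "A2": (10, 35, 20, 80, 21),
    "A3": (5, 40, 8, 85, 24),
    "A4": (5, 45, 8, 90, 24),
}

def within_allowable(ashrae_class, dry_bulb_c, rh_pct, dew_point_c):
    lo_db, hi_db, lo_rh, hi_rh, max_dp = ALLOWABLE[ashrae_class]
    return (lo_db <= dry_bulb_c <= hi_db
            and lo_rh <= rh_pct <= hi_rh
            and dew_point_c <= max_dp)

print(within_allowable("A2", 25.0, 50.0, 12.0))  # True: inside the envelope
print(within_allowable("A2", 36.0, 45.0, 12.0))  # False: 36 °C exceeds 35 °C
```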
Basic Air Flow Management
[Diagram: air re-circulation without blanking panels vs. cold and warm air separation with blanking panels; inlet and exhaust temperatures annotated in the 18 to 32 °C range. Without panels, exhaust air re-circulates and raises rack inlet temperatures; with panels, the cold and warm air streams stay separated.]
Raised Floor Air Flow
Air Bypass
- 40-60% of available supply air is short-cycled back to the cooling units
- This air mixes with exhaust air from the racks, lowering the return air temperature to the CRACs and reducing efficiency
- Bypass air originates from a number of sources (a rough estimate follows below):
  - Excess air volume relative to heat load
  - Unsealed cable cut-out openings under racks
  - Mis-located perforated tiles in hot aisles
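The bypass fraction can be estimated from delivered versus consumed airflow. The airflow figures below are hypothetical placeholders of mine; in practice they come from CRAC specifications and rack-level measurements.

```python
# Minimal sketch: estimating the bypass fraction, i.e. supply air that
# returns to the cooling units without passing through IT equipment.

crac_supply_cfm = 60000.0      # total airflow delivered by the CRACs
it_intake_cfm = 30000.0        # airflow actually drawn through the racks

bypass_cfm = crac_supply_cfm - it_intake_cfm
bypass_fraction = bypass_cfm / crac_supply_cfm

print(f"Bypass: {bypass_fraction:.0%}")  # 50%, inside the 40-60% range cited above
```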
Computational Fluid Dynamics
- Software modeling that simulates air flow conditions (a toy illustration follows below)
- Can conduct what-if scenarios to determine the best alternatives to accommodate changes such as growth, equipment layout, or new cooling demands
- Can be applied to any data centre environment:
  - Existing facilities: identify how to correct air flow and cooling issues
  - Extensions / expansions: determine what cooling is required and where it is best placed
  - Project the life expectancy of the facility as rack heat loads increase and/or cooling is upgraded
  - Simulate CRAC failure to determine the impact on equipment and what level of redundancy is required
  - New builds: evaluate design and equipment layout and how to optimize:
    - Placement of CRACs and rack equipment
    - Number and size of CRACs
    - Overhead, in-row, or perimeter cooling
    - Supply / return plenums
    - Modular build-out of the facility
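As a vastly simplified illustration of the what-if workflow: real data-centre CFD couples airflow (momentum and pressure) with heat transport, while the toy model below is pure heat diffusion on a 2D grid with a made-up geometry of mine. It only shows the shape of the exercise: solve a baseline field, change a boundary condition (here, more supply tiles), and re-solve.

```python
# Toy "what-if" sketch, not real CFD: steady-state temperature on a 2D grid.
import numpy as np

def solve_temperature(hot_cells, cold_cells, shape=(40, 40), iters=5000):
    """Jacobi iteration: hot (rack exhaust) and cold (supply tile) cells are
    pinned; interior cells relax to the average of their four neighbours."""
    T = np.full(shape, 22.0)                      # start at room temperature
    for _ in range(iters):
        T_new = T.copy()
        T_new[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1]
                                    + T[1:-1, :-2] + T[1:-1, 2:])
        for (r, c), temp in hot_cells + cold_cells:
            T_new[r, c] = temp                    # re-pin boundary conditions
        T = T_new
    return T

racks = [((20, c), 38.0) for c in range(10, 30)]           # row of hot exhausts
tiles_baseline = [((5, c), 16.0) for c in range(10, 15)]   # few supply tiles
tiles_modified = [((5, c), 16.0) for c in range(10, 30)]   # tiles along whole aisle

for label, tiles in [("baseline", tiles_baseline), ("modified", tiles_modified)]:
    T = solve_temperature(racks, tiles)
    inlet = T[18, 10:30]          # cells just in front of the rack row
    print(f"{label}: mean inlet {inlet.mean():.1f} C, max {inlet.max():.1f} C")
```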
Model Comparisons
Baseline:
- Hot / cold aisle not pervasive
- Hot exhaust feeding into inlets
- Under-rack cut-outs
- Limited perforated tiles
- Cable trays and cabling in supply plenum
Modified:
- Reconfigured racks to hot / cold aisle
- Inserted perforated tiles in cold aisle
- Closed under-rack cable cut-outs
- Repositioned CRACs
- Removed cable trays
Rack Thermographic
Baseline: RCI_HI (Rack Cooling Index - High) 54.55%; RCI_LO (Rack Cooling Index - Low) 75.02%
Modified: RCI_HI 99.4%; RCI_LO 45.06%
- More consistent inlet temperatures; opportunity to increase supply temperature
- Potential to increase rack loadings
- Increased energy efficiency
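For reference, the RCI metrics above follow Herrlin's definition: RCI_HI penalizes rack inlets above the recommended maximum, RCI_LO penalizes inlets below the recommended minimum, and 100% means no violations. The sketch below uses the ASHRAE class A1 thresholds from the earlier table; the inlet readings are hypothetical.

```python
# Minimal sketch of the Rack Cooling Index (Herrlin's definition).

REC_MIN, REC_MAX = 18.0, 27.0    # recommended range (°C)
ALL_MIN, ALL_MAX = 15.0, 32.0    # allowable range, class A1 (°C)

def rci_hi(inlets):
    over = sum(t - REC_MAX for t in inlets if t > REC_MAX)
    return (1 - over / ((ALL_MAX - REC_MAX) * len(inlets))) * 100

def rci_lo(inlets):
    under = sum(REC_MIN - t for t in inlets if t < REC_MIN)
    return (1 - under / ((REC_MIN - ALL_MIN) * len(inlets))) * 100

inlets = [21.0, 24.5, 26.0, 28.5, 31.0]   # measured rack inlet temps (°C)
print(f"RCI_HI = {rci_hi(inlets):.1f}%, RCI_LO = {rci_lo(inlets):.1f}%")
# RCI_HI = 78.0% (two inlets exceed 27 °C); RCI_LO = 100.0% (none below 18 °C)
```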
CRAC Air Flow Pathlines
[Figure: baseline vs. modified]
Modified:
- Smoother air flow, less turbulence
- No high-pressure areas
Static Pressure Map
[Figure: baseline vs. modified]
Summary
- Cooling is one of the largest costs of data centre operation
- A cooling audit should be conducted to identify areas for improvement
- CFD is an effective tool to evaluate changes and determine their impact
- Improvements to cooling efficiency have fast payback (a simple payback sketch follows below)
- Incentives/rebates are available to defray costs and shorten payback periods
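To show how rebates shorten payback, here is a simple payback sketch. All figures are hypothetical placeholders of mine; a real business case would use audited energy data, local utility rates, and actual rebate terms.

```python
# Minimal sketch: simple payback for a cooling-efficiency project.

project_cost = 40000.0          # blanking panels, tile changes, CFD study...
utility_rebate = 10000.0        # incentive from the local utility
energy_saved_kwh_yr = 250000.0  # annual reduction in cooling energy
rate_per_kwh = 0.10             # $/kWh

annual_savings = energy_saved_kwh_yr * rate_per_kwh
payback_years = (project_cost - utility_rebate) / annual_savings
print(f"Simple payback: {payback_years:.1f} years")   # 1.2 years here
```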