Calculating Total Cooling Requirements for Data Centers

White Paper 25, Revision 3
by Neil Rasmussen

Executive summary

This document describes how to estimate the heat output of information technology (IT) equipment and other devices in a data center, such as UPS systems, for the purpose of sizing air conditioning systems. A number of common conversion factors and design guideline values are also included.

Contents

Introduction
Measuring heat output
Example of a typical system
Other heat sources
Humidification
Sizing air conditioning
Conclusion
Resources

Introduction

All electrical equipment produces heat, which must be removed to prevent the equipment temperature from rising to an unacceptable level. Most information technology equipment, and most other equipment found in a data center or network room, is air-cooled. Sizing a cooling system requires an understanding of the amount of heat produced by the equipment in the enclosed space, along with the heat produced by the other sources typically encountered.

Measuring heat output

Heat is energy and is commonly expressed in Joules, BTU, Tons, or calories. Common measures of heat output rate for equipment are BTU per hour, Tons per day, and Joules per second (Joules per second equals Watts). There is no compelling reason to use all of these different measures for the same quantity, yet any of them might be used to express power or cooling capacity. The mixed use of these measures causes a great deal of needless confusion for users and specifiers. Fortunately, there is a worldwide trend among standards-setting organizations to move all power and cooling capacity measurements to a common standard, the Watt; the archaic terms BTU and Tons will be phased out over time.[1] For this reason, this paper discusses cooling and power capacities in Watts. The Watt as the common standard is also fortunate because it simplifies the work of data center design, as will be explained later.

In North America, specifications for power and cooling capacity are still often provided in the legacy BTU and Tons terms. The following conversions are therefore provided to assist the reader:

Table 1 – Heat output conversion table

  Given a value in   Multiply by   To get
  BTU per hour       0.293         Watts
  Watts              3.41          BTU per hour
  Tons               3,530         Watts
  Watts              0.000283      Tons

The power transmitted by computing or other information technology equipment through the data lines is negligible. Therefore, the power consumed from the AC mains is essentially all converted to heat, which means that the thermal output of IT equipment in Watts is simply equal to its power consumption in Watts. The BTU-per-hour figure sometimes provided in datasheets is not needed to determine the thermal output of the equipment: the thermal output is simply the same as the power input.[2]

[1] The term Tons refers to the cooling capacity of ice and is a relic of the period from 1870 to 1930, when refrigeration and air conditioning capacity were provided by the daily delivery of blocks of ice.

[2] The one exception to this rule is Voice over IP (VoIP) routers: in these devices, up to 30% of the power consumed may be transmitted to remote terminals, so their heat load may be lower than the electrical power they consume. Assuming that the entire electrical power is dissipated locally, as this paper does, slightly overstates the heat output of VoIP routers, an insignificant error in most cases.
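To make Table 1 concrete, the conversions reduce to a few one-line functions. The following is a minimal Python sketch; the function names are ours, not part of the white paper:

```python
# Conversion factors from Table 1 (the Watt as the common standard)

def btu_per_hour_to_watts(btu_hr):
    return btu_hr * 0.293

def watts_to_btu_per_hour(watts):
    return watts * 3.41

def tons_to_watts(tons):
    return tons * 3530.0

def watts_to_tons(watts):
    return watts * 0.000283

# Example: a nominal 10-Ton CRAC unit is roughly 35,300 W of cooling capacity
print(tons_to_watts(10))  # 35300.0
```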

Determining heat output of a complete system

The total heat output of a system is the sum of the heat outputs of its components. The complete system includes the IT equipment plus other items such as UPS, power distribution, air conditioning units, lighting, and people. Fortunately, the heat output rates of these items can be easily determined via simple and standardized rules. The heat output of UPS and power distribution systems consists of a fixed loss plus a loss proportional to operating power. These losses are sufficiently consistent across equipment brands and models that they can be approximated without significant error. Lighting and people can also be readily estimated using standard values. The only information needed to determine the cooling load for the complete system is a few readily available values, such as the floor area in square feet and the rated power of the electrical system.

Air conditioning units create a significant amount of heat from fans and compressors. This heat is exhausted to the outside and does not create a thermal load inside the data center. It does, however, detract from the efficiency of the air conditioning system and is normally accounted for when the air conditioner is sized.

A detailed thermal analysis using thermal output data for every item in the data center is possible, but a quick estimate using simple rules gives results within the typical margin of error of the more complicated analysis. The quick estimate also has the advantage that it can be performed by anyone, without specialized knowledge or training. A worksheet for the rapid calculation of the heat load is provided in Table 2. Using the worksheet, it is possible to determine the total heat output of a data center quickly and reliably. The use of the worksheet is described in the procedure below the table.

Table 2 – Data center or network room heat output calculation worksheet

  Item                 Data required                         Heat output calculation                                       Heat output subtotal
  IT equipment         Total IT load power in Watts          Same as total IT load power in Watts                          _____ W
  UPS with battery     Power system rated power in Watts     (0.04 x power system rating) + (0.05 x total IT load power)   _____ W
  Power distribution   Power system rated power in Watts     (0.01 x power system rating) + (0.02 x total IT load power)   _____ W
  Lighting             Floor area in sq ft or sq m           2.0 x floor area (sq ft), or 21.53 x floor area (sq m)        _____ W
  People               Max # of personnel in data center     100 x max # of personnel                                      _____ W
  Total                Subtotals from above                  Sum of heat output subtotals                                  _____ W

Procedure

1. Obtain the information required in the "Data required" column; consult the data definitions below in case of questions.
2. Perform the heat output calculations and enter the results in the subtotal column.
3. Add the subtotals to obtain the total heat output.
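The worksheet also reduces to a few lines of code. This Python sketch implements the Table 2 rules of thumb as written; the function and parameter names are ours, chosen for illustration:

```python
def total_heat_output_watts(it_load_w, power_rating_w, floor_area_sqft, personnel):
    """Estimate total heat output (W) of a data center or network room
    using the Table 2 worksheet."""
    it = it_load_w                                    # IT equipment: heat output equals power input
    ups = 0.04 * power_rating_w + 0.05 * it_load_w    # UPS with battery: fixed + proportional loss
    pdist = 0.01 * power_rating_w + 0.02 * it_load_w  # power distribution: fixed + proportional loss
    lighting = 2.0 * floor_area_sqft                  # lighting: 2.0 W per square foot
    people = 100.0 * personnel                        # people: 100 W per person
    return it + ups + pdist + lighting + people
```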

Data definitions

Total IT load power in Watts – the sum of the power inputs of all the IT equipment.

Power system rated power – the power rating of the UPS system. If a redundant system is used, do not include the capacity of the redundant UPS.

Example of a typical system

Consider the thermal output of a typical system: a 5,000 ft2 (465 m2) data center with a 250 kW power rating, 150 racks, and a maximum staff of 20. In this example, the data center is assumed to be loaded to 30% of capacity, which is typical; for a discussion of typical utilization, see White Paper 37, Avoiding Costs from Oversizing Data Center and Network Room Infrastructure. The total IT load of this data center is 30% of 250 kW, or 75 kW. Under this condition, the total thermal output of the data center is 105 kW, or approximately 50% more than the IT load. The relative contributions of the various items to the total thermal output are shown in Figure 1.

Figure 1 – Relative contributions to the total thermal output of a typical data center: IT loads 71%, UPS 13%, lighting 10%, power distribution 4%, personnel 2%.

Note that the contributions of the UPS and the power distribution to the thermal output are amplified by the fact that the system is operating at only 30% of capacity. If the system were operating at 100% of capacity, the efficiency of the power systems would increase and their relative contributions to the thermal output would decrease. This significant loss of efficiency is a real cost of oversizing a system.

Other heat sources

The prior analysis ignores sources of environmental heat such as sunlight through windows and heat conducted in from outside walls. Many small data centers and network rooms have no walls or windows to the outside, so this assumption introduces no error. However, for large data centers with walls or a roof exposed to the outdoors, additional heat enters the data center and must be removed by the air conditioning system.
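Running the example's figures through the worksheet sketch above reproduces the paper's result:

```python
# 5,000 sq ft data center, 250 kW rated power, loaded to 30%, staff of 20
it_load = 0.30 * 250_000   # 75,000 W of IT load
total = total_heat_output_watts(
    it_load_w=it_load,
    power_rating_w=250_000,
    floor_area_sqft=5_000,
    personnel=20,
)
print(total)  # 104750.0 W, i.e. approximately 105 kW (~1.4x the IT load)
```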

If the data room is located within the confines of an air-conditioned facility, these other heat sources may be ignored. If the data center has significant wall or ceiling exposure to the outside, then an HVAC consultant must assess the maximum thermal load, which must be added to the thermal requirement of the complete system determined in the previous section.

Humidification

In addition to removing heat, an air conditioning system for a data center is designed to control humidity. Ideally, once the desired humidity is attained, the system would operate with a constant amount of water in the air and there would be no need for ongoing humidification. Unfortunately, in most air conditioning systems the air-cooling function causes significant condensation of water vapor and consequent loss of humidity. Supplemental humidification is therefore required to maintain the desired humidity level. This supplemental humidification creates an additional heat load on the CRAC unit, effectively decreasing its cooling capacity and consequently requiring oversizing.

For small data rooms or large wiring closets, an air conditioning system that isolates the bulk return air from the bulk supply air by using ducting can result in a situation where no condensation occurs, so no continuous supplemental humidification is needed. This allows 100% of the rated air conditioning capacity to be utilized and maximizes efficiency. For large data centers with high amounts of air mixing, the CRAC unit must deliver air at low temperatures to overcome the recirculation of the higher-temperature equipment exhaust air. This substantially dehumidifies the air, creates the need for supplemental humidification, and significantly decreases the performance and capacity of the air conditioning system. As a result, the CRAC system may need to be oversized by up to 30%.

The required oversizing for a CRAC unit therefore ranges from 0% for a small system with ducted exhaust air return to 30% for a system with high levels of mixing within the room. For more information on humidification, see White Paper 58, Humidification Strategies for Data Centers and Network Rooms.

Sizing air conditioning

Once the cooling requirements are determined, it is possible to size an air conditioning system. The following factors, described earlier in this paper, must be considered:

  The size of the cooling load of the equipment (including power equipment)
  The size of the cooling load of the building
  Oversizing to account for humidification effects
  Oversizing to create redundancy
  Oversizing for future requirements

The Watt loads of each of these factors can be summed to determine the total thermal load, as the sketch below illustrates.
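One way to assemble these factors is sketched below, treating each oversizing allowance as a fraction of the summed cooling load; this additive treatment of the allowances is our reading of the paper, and the names are illustrative:

```python
def required_crac_capacity_watts(equipment_load_w,
                                 building_load_w=0.0,
                                 humidification_oversize=0.0,  # 0.0 (ducted return) to 0.30 (high mixing)
                                 redundancy_oversize=0.0,
                                 future_growth_oversize=0.0):
    """Sum the equipment and building cooling loads, then apply the
    oversizing allowances as fractions of that base load."""
    base_load = equipment_load_w + building_load_w
    allowances = humidification_oversize + redundancy_oversize + future_growth_oversize
    return base_load * (1.0 + allowances)

# The 105 kW example above, with 30% humidification oversizing:
print(required_crac_capacity_watts(104_750, humidification_oversize=0.30))  # ~136 kW
```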

Conclusion

The determination of cooling requirements for IT systems can be reduced to a simple process that anyone can perform without special training. Expressing all measures of power and cooling in Watts simplifies the process. A general rule is that the CRAC system rating must be 1.3 times the anticipated IT load rating, plus any capacity added for redundancy. This approach works well for smaller network rooms of under 4,000 ft2 (372 m2).

For larger data centers, the cooling requirements alone are typically not sufficient to select an air conditioner. The effects of other heat sources, such as walls and roof, along with recirculation, are typically significant and must be examined for the particular installation. The design of the air handling ductwork or raised floor has a significant effect on overall system performance and also greatly affects the uniformity of temperature within the data center. The adoption of a simple, standardized, and modular air distribution system architecture, combined with the simple heat load estimation method described here, could significantly reduce the engineering requirements for data center design.

About the author

Neil Rasmussen is a Senior VP of Innovation for Schneider Electric. He establishes the technology direction for the world's largest R&D budget devoted to power, cooling, and rack infrastructure for critical networks. Neil holds 19 patents related to high-efficiency and high-density data center power and cooling infrastructure, and has published over 50 white papers related to power and cooling systems, many published in more than 10 languages, most recently with a focus on the improvement of energy efficiency. He is an internationally recognized keynote speaker on the subject of high-efficiency data centers. Neil is currently working to advance the science of high-efficiency, high-density, scalable data center infrastructure solutions and is a principal architect of the APC InfraStruXure system. Prior to founding APC in 1981, Neil received his bachelor's and master's degrees in electrical engineering from MIT, where he did his thesis on the analysis of a 200 MW power supply for a tokamak fusion reactor. From 1979 to 1981 he worked at MIT Lincoln Laboratory on flywheel energy storage systems and solar electric power systems.

Resources

Avoiding Costs from Oversizing Data Center and Network Room Infrastructure
White Paper 37

Humidification Strategies for Data Centers and Network Rooms
White Paper 58

White Paper Library
whitepapers.apc.com

TradeOff Tools
tools.apc.com

Contact us

For feedback and comments about the content of this white paper:
Data Center Science Center, DCSC@Schneider-Electric.com

If you are a customer and have questions specific to your data center project:
Contact your Schneider Electric representative.