Airflow and Cooling Performance of Data Centers: Two Performance Metrics




2008, American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. (www.ashrae.org). Published in ASHRAE Transactions, Vol. 114, Part 2. For personal use only. Additional reproduction, distribution, or transmission in either print or digital form is not permitted without ASHRAE's prior written permission.

SL-08-018

Airflow and Cooling Performance of Data Centers: Two Performance Metrics

Magnus K. Herrlin, PhD
Member ASHRAE

ABSTRACT

This paper focuses on the use of performance metrics for analyzing air-management systems in data centers. Such systems are important for adequately cooling the electronic equipment and for controlling the associated energy costs. Computational Fluid Dynamics (CFD) modeling can help predict how a cooling solution will perform before it is built; however, modeling can also generate an unwieldy amount of data. The crux of the matter is knowing what to look for and then objectively characterizing and reporting the performance. Performance metrics provide a great opportunity for the data center industry at large: they could form the foundation for a standardized way of specifying and reporting various cooling solutions. This paper specifically demonstrates the use of two metrics. The Rack Cooling Index (RCI) is a measure of how well the system cools the electronics within the manufacturers' specifications, and the Return Temperature Index (RTI) is a measure of the energy performance of the air-management system. Combined, they provide an opportunity to judge the performance of the air-management system objectively after comprehensive CFD modeling. A real-world example demonstrates the use of these metrics in the design of a high-density data center. The analysis was designed to answer whether a fairly conventional raised-floor system could support significantly higher heat densities than was previously thought.
By enclosing the cold equipment aisles, it is demonstrated that the cooling capacity can indeed be greatly increased. This should provide welcome relief for many data centers that are currently running out of capacity due to a low raised floor.

INTRODUCTION

The main task of a data center facility is to provide an adequate equipment environment, and a relevant metric for equipment intake temperatures should be used to gauge the thermal environment. The Rack Cooling Index (RCI) is a measure of how effectively equipment racks are cooled and maintained within industry temperature guidelines and standards (Herrlin 2005). The index is designed to help evaluate equipment room health when managing existing environments or designing new ones. It is also well suited as a design specification for new data centers. Herrlin and Belady (2006) demonstrated the use of the RCI for evaluating overhead and under-floor air distribution in data centers.

There has been continued interest in understanding the behavior of various cooling technologies in data centers. One issue is whether gravity plays a role in high-density environments with open room architectures. CFD modeling in conjunction with the RCI was used to demonstrate that gravity-assisted air mixing is indeed central to creating an adequate and forgiving thermal equipment environment.

The Return Temperature Index (RTI) was introduced by Herrlin (2007) as a measure of the level of by-pass air or recirculation air in the equipment room. Both effects are detrimental to the overall thermal and energy performance of the space. By-pass air does not participate in cooling the electronic equipment and depresses the return air temperature. Recirculation, on the other hand, is one of the main causes of hot spots, areas significantly hotter than the average space temperature.

Since the costs or savings associated with improving the RCI need to be known, the concept of cost functions was developed based on the costs to condition the equipment space (Herrlin and Khankari 2008). Combining the RCI with cost functions allows current or planned equipment room designs to be better evaluated and provides comprehensive design information for the data center owner and/or consultant.

The present paper takes the analysis one step further. Given that air distribution from a raised floor is by far the most common way of cooling data centers, this air-distribution scheme is used to demonstrate the use of the RCI combined with the RTI. Both metrics are computed with Computational Fluid Dynamics (CFD) modeling to evaluate the thermal equipment environment for different heat densities and room architectures at a fixed raised-floor height.

Others have suggested related metrics; for example, the Supply Heat Index (SHI) and the Return Heat Index (RHI) (Sharma et al. 2002). These two dimensionless indices not only provide a tool to understand convective heat transfer in the equipment room but also suggest means to improve energy efficiency. VanGilder and Shrivastava (2007) introduced the dimensionless Capture Index (CI), a cooling performance metric at the equipment rack level that is also typically computed by CFD modeling.

Finally, although all stakeholders would benefit from using performance metrics such as the RCI and RTI, the data center owner and/or operator would perhaps benefit the most: they can now specify a certain level of performance for the air-management system. The designer or contractor can demonstrate the performance of different design permutations and of the final design, and cooling equipment manufacturers have another tool to refine their products.

Magnus K. Herrlin is president of ANCIS Incorporated, San Francisco, CA.

DESIGN OF DATA CENTER

The design of a high-density data center is used to demonstrate the benefits of using the RCI and RTI.
A number of CFD simulations were performed to quantify the potential to substantially enhance the performance of a fairly conventional four-foot raised-floor design for a new supercomputer center facility. It was desirable to avoid the two-story design that is sometimes used for extreme heat densities. The comparison involves four heat densities (5 kW, 10 kW, 20 kW, and 30 kW per rack) and three room architectures (open, semi-enclosed, and enclosed). The highest density corresponds to more than 1,000 W/ft².

The open or traditional architecture does not use physical barriers to enhance the separation of hot and cold air in the equipment room. The semi-enclosed architecture uses doors at the ends of the equipment rows to help contain the cold air in the cold aisles and avoid detrimental air mixing. Finally, the enclosed architecture separates the cold aisle from the rest of the equipment room with barriers that enclose the entire aisle; air mixing can thereby be kept to a minimum.

Figure 1 shows isometric views of the three room architectures and corresponding temperature maps of the intake temperatures of one lineup as computed by CFD modeling. This limited sub-domain was selected for the parametric analysis; the rest of the equipment space is a repetition of these two lineups. The 10,000 ft² data center is cooled by air handlers that provide the required airflow into the raised-floor plenum from galleries outside the room. The supply temperature is 65°F, and the supply airflow is 100% of the total rack airflow (the volume that the electronic equipment draws). Perforated floor tiles (grates) deliver the air from the raised-floor plenum into the cold aisles in front of the electronic equipment.

It can be seen from the temperature maps that the aisle doors limit the outflow of cold air into the equipment room, whereas the enclosure essentially isolates the cold aisle. The characteristic wave form of cold air seen in the first two architectures is due to poor pressure distribution in the floor plenum and, in turn, poor airflow distribution through the perforated floor tiles. The result is significantly elevated intake temperatures at the bottom of the wave.

Figure 1. Comparison of three room architectures (open, semi-enclosed, and enclosed) and corresponding intake temperature maps along one equipment lineup at 20 kW/rack.

Figure 2. Hypothetical intake temperature distribution and graphical representation of the RCI_HI.

Almost every air-management design process includes analyzing permutations to arrive at the final design, including that of designing the supercomputer center. But how can the permutations readily be communicated to provide the foundation for finding the optimal design? One of the strong features of CFD modeling is also one of its weak ones; namely, the wealth of available output. The RCI and RTI metrics were deployed in this particular effort to condense the output. The rationale and definition of these metrics are described below.

Crafting a metric is a delicate balance between simplicity and accuracy. There is always a temptation to make an index complicated enough to capture everything; that approach generally defeats the purpose. Effective metrics are often strikingly simple.

RACK COOLING INDEX (RCI)™

The following is a brief overview of the Rack Cooling Index (RCI) to provide the understanding necessary to interpret the index. For a complete description of the RCI, the reader is referred to the original work by Herrlin (2005) as published by ASHRAE.

The index deals with rack intake temperatures, the conditions that air-cooled equipment depends on for its continuous operation. The allowable intake temperature limits in Figure 2 represent the equipment design conditions, whereas the recommended limits refer to preferred facility operation. Over-temperature conditions exist once one or more intake temperatures exceed the maximum recommended temperature. The total over-temperature represents a summation of over-temperatures across all rack inlets. Similarly, under-temperature conditions exist when intake temperatures drop below the minimum recommended. The numerical values of these limits depend on the applied guideline (e.g., ASHRAE 2004) or standard (e.g., Telcordia 2001, 2002).
Compliance with a given intake temperature specification is the ultimate cooling performance metric; the RCI is such a metric. RCI = 100% means ideal conditions: no over- or under-temperatures, and all temperatures are within the recommended temperature range. The ASHRAE (2004) Class 1 data center environment is used for the RCI calculations in this example. In the ASHRAE specification, the recommended equipment intake temperature range is 68-77°F (20-25°C) and the allowable range is 59-90°F (15-32°C). Table 1 shows an approximate rating of the RCI based on numerous analyses.

Table 1. Rating of the RCI

    Rating      RCI
    Ideal       100%
    Good        ≥96%
    Acceptable  91-95%
    Poor        ≤90%

The RCI has two parts, describing the equipment room health at the high (HI) end and at the low (LO) end of the temperature range, respectively. Figure 2 provides a graphical representation of the RCI_HI; an analogous index, RCI_LO, is defined for temperature conditions at the low end of the range. The RCI_HI definition is as follows:

    RCI_HI = [1 - (Total Over-Temp / Max Allowable Over-Temp)] × 100 [%]    (1)

The hands-on interpretation of the index is as follows:

    RCI_HI = 100%   All intake temperatures ≤ max recommended temperature
    RCI_HI < 100%   At least one intake temperature > max recommended temperature

    RCI_HI < 0%     At least one intake temperature > max allowable temperature

The RCI_HI is a measure of the absence of over-temperatures; 100% means that no over-temperatures exist (ideal). The lower the percentage, the greater the probability (risk) that equipment experiences temperatures above the maximum allowable temperature. A warning flag (*) appended to the index indicates that one or several intake temperatures are outside the allowable range. The index for the hypothetical intake temperature distribution shown in Figure 2 is approximately RCI_HI = 95%*.

CFD modeling provides a convenient tool for computing all the intake temperatures necessary for calculating the index. Based on such modeling, Figure 3 shows the example results as measured by the RCI_HI. Presenting the results in this compact format provides an opportunity to detect trends that might otherwise be lost in the large data stream. Every designer has a metric in mind when judging whether a design is adequate. The problem with such unwritten metrics is that there are probably as many metrics as there are designers. The RCI, on the other hand, provides an unbiased and objective way of quantifying the quality of an air-management design from a thermal perspective.

Compared to the open traditional architecture, the semi-enclosed architecture improves the thermal conditions, that is, yields a higher RCI_HI. The enclosed design, however, provides a truly striking boost to the cooling capacity of the raised floor. Even at the 30 kW per rack level, the RCI_HI is elevated to 100% (ideal). The implications are great: many data centers that are running out of capacity due to limited raised-floor heights can be reconfigured with cold-aisle enclosures to allow very significant heat densities. This should provide welcome relief for many data centers.
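To make the definitions concrete, the RCI computation can be sketched in a few lines of Python. This is an illustrative sketch, not code from the paper: the function names are invented here, the temperature limits are the ASHRAE (2004) Class 1 values quoted above, and the maximum allowable over-temperature is taken as the number of intakes times the span between the maximum allowable and maximum recommended temperatures, consistent with the definition in Herrlin (2005).

```python
# Illustrative RCI_HI / RCI_LO sketch (names invented for this example).
# Limits: ASHRAE (2004) Class 1, in deg F.
T_MIN_REC, T_MAX_REC = 68.0, 77.0   # recommended range
T_MIN_ALL, T_MAX_ALL = 59.0, 90.0   # allowable range

def rci_hi(intake_temps):
    """RCI_HI = [1 - Total Over-Temp / Max Allowable Over-Temp] * 100 [%]."""
    n = len(intake_temps)
    total_over = sum(max(t - T_MAX_REC, 0.0) for t in intake_temps)
    max_allowable_over = n * (T_MAX_ALL - T_MAX_REC)
    rci = (1.0 - total_over / max_allowable_over) * 100.0
    # Warning flag: at least one intake outside the allowable range
    flagged = any(t > T_MAX_ALL for t in intake_temps)
    return rci, flagged

def rci_lo(intake_temps):
    """Analogous index for the low end of the temperature range."""
    n = len(intake_temps)
    total_under = sum(max(T_MIN_REC - t, 0.0) for t in intake_temps)
    max_allowable_under = n * (T_MIN_REC - T_MIN_ALL)
    rci = (1.0 - total_under / max_allowable_under) * 100.0
    flagged = any(t < T_MIN_ALL for t in intake_temps)
    return rci, flagged

# All intakes within the recommended range -> ideal index, no flag
print(rci_hi([70.0, 72.0, 75.0, 76.0]))  # (100.0, False)
```

In a CFD workflow, `intake_temps` would be the full list of rack intake temperatures extracted from the model, which is exactly the condensation of output the paper advocates.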
Finally, it should be noted that the enclosed architecture may be more vulnerable to catastrophic cooling outages, and engineered thermal solutions are often required to safeguard the servers.

RETURN TEMPERATURE INDEX (RTI)™

In the case of the open or semi-enclosed architecture, the RCI_HI can be improved by increasing the supply airflow rate and/or lowering the supply air temperature. However, these measures carry an energy penalty. The Return Temperature Index (RTI) is a measure of the energy performance of the air-management system; deviations from 100% indicate declining performance. The index is defined as follows (Herrlin 2007):

    RTI = [(T_Return - T_Supply) / ΔT_Equip] × 100 [%]    (2)

where

    T_Return = return air temperature (weighted average)
    T_Supply = supply air temperature (weighted average)
    ΔT_Equip = temperature rise across the electronic equipment (weighted average)

Figure 3. RCI_HI as a function of heat density and room architecture (connecting straight lines are not necessarily a physical representation).

Since the temperature rise across the electronic equipment provides the potential for high return temperatures, it makes sense to normalize the RTI with regard to this quantity. In other words, the RTI provides a measure of the actual utilization of the available temperature differential. Consequently, a low return air temperature is not necessarily a sign of poor air management: if the electronic equipment provides only a modest temperature rise, the return air temperature cannot be expected to be high. Many legacy servers and other electronic systems have a temperature rise of only 10°F, whereas new blade servers can have a temperature differential of 50°F.

The interpretation of the index is straightforward (Table 2). Deviations from 100% indicate declining performance. A number above 100% (over-utilization) suggests mainly air recirculation, which elevates the return air temperature. Unfortunately, this also means elevated equipment intake temperatures and poor RCI values. A number below 100% (under-utilization) suggests mainly air by-pass: cold air by-passes the electronic equipment and is returned directly to the air handler, reducing the return temperature and the energy performance. This often happens when the supply flow is increased to combat hot spots.

Table 2. Rating of the RTI

    Rating          RTI
    Target          100%
    Recirculation   >100%
    By-Pass         <100%

Note that there may be a number of legitimate reasons to operate below or above 100%. For example, some air-distribution schemes are designed to provide a certain level of air mixing (recirculation) to produce an even equipment intake temperature; some overhead air-distribution systems operate this way.

Finally, Figure 4 shows the air-management and energy penalty, as expressed by the RTI, of taking the 5 and 10 kW racks to an RCI_HI of 100% by increasing the supply airflow rate.
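Equation (2) translates directly into code. The sketch below is illustrative: the function names are invented here, and weighting the averages by airflow volume is an assumption consistent with the paper's "weighted average" qualifiers.

```python
# Illustrative RTI sketch per Eq. (2); names invented for this example.

def weighted_avg(temps, flows):
    """Airflow-weighted average temperature (assumed weighting scheme)."""
    return sum(t * q for t, q in zip(temps, flows)) / sum(flows)

def rti(t_return, t_supply, dt_equip):
    """RTI = [(T_Return - T_Supply) / dT_Equip] * 100 [%]."""
    return (t_return - t_supply) / dt_equip * 100.0

# Balanced system: the return-supply differential fully utilizes the
# equipment temperature rise -> RTI = 100% (target)
print(rti(85.0, 65.0, 20.0))  # 100.0

# By-pass air depresses the return temperature -> RTI < 100%
print(rti(80.0, 65.0, 20.0))  # 75.0
```

An RTI above 100% would instead indicate recirculation, since the return air can only exceed the supply temperature by more than the equipment rise if heated exhaust air re-enters the return stream.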
At higher densities, it is difficult to combat poor RCI values by increasing the supply airflow rate alone; a lower supply temperature is also generally required. As expected, the RTI (i.e., the energy performance) declines from 100% for the open and semi-enclosed architectures; the reduction is inversely proportional to the necessary increase in airflow. Since the RTI is below 100%, some air by-passes the electronic equipment. This demonstrates that an RCI analysis should be accompanied by an energy analysis, in this case using the RTI. Once again, the RCI and RTI metrics provide yardsticks for ranking design permutations. Each user, however, needs to determine the proper balance between rack cooling effectiveness, energy performance, and other considerations.

Figure 4. Deterioration of the RTI and the energy performance when taking the RCI_HI to 100% (by lifting the 5 and 10 kW lines in Figure 3 to RCI_HI = 100%).

DISCUSSION

This paper describes two performance metrics for analyzing air-management systems in data centers. Given that air distribution from a raised floor is the most common way of cooling data centers, this scheme was used to demonstrate the use and benefits of the metrics. The Rack Cooling Index (RCI) was computed with Computational Fluid Dynamics (CFD) modeling to evaluate the thermal equipment environment in a

new supercomputer center at four different heat densities and three room architectures. The RCI is a measure of how effectively equipment racks are cooled and maintained within a given temperature specification. By utilizing the RCI, it was shown that the cooling capacity of the raised-floor system can be significantly boosted by enclosing the cold equipment aisles. The capacity limitation of raised floors stems from poor pressure distribution in the floor plenum at high air velocities; poor pressure distribution leads to poor airflow distribution through the perforated floor tiles. By enclosing the cold aisles, however, the airflow distribution becomes less important, and the airflow can be increased to levels that would have been prohibitively high in conventional open room architectures. It is imperative, however, that the floor plenum be both designed and maintained to accommodate high airflow rates.

The Return Temperature Index (RTI) provided information for improving the conventional open room architecture by increasing the airflows. The RTI is a measure of the performance of the air-management system and how well it controls by-pass air and recirculation air. Since improving the RCI_HI can incur an energy penalty, the RTI can help evaluate how severe such a penalty may be. Indeed, it was demonstrated that the RTI was significantly reduced by bringing the RCI into absolute compliance (RCI_HI = 100%).

Advanced Computational Fluid Dynamics (CFD) modeling applied to data center analysis can be enhanced by using metrics such as those outlined in this paper. The data center designer needs not only advanced analytical tools but also practical tools for making more informed decisions. Well-crafted metrics will undoubtedly play an important role.

REFERENCES

ASHRAE. 2004. Special Publication, Thermal Guidelines for Data Processing Environments. American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., Atlanta, GA.

Herrlin, M. K., and Khankari, K. 2008. Method for Optimizing Equipment Cooling Effectiveness and HVAC Cooling Costs in Telecom and Data Centers. ASHRAE Transactions, Vol. 114, Part 1. American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., Atlanta, GA.

Herrlin, M. K. 2007. Improved Data Center Energy Efficiency and Thermal Performance by Advanced Airflow Analysis. Digital Power Forum 2007, San Francisco, CA, September 10-12, 2007.

Herrlin, M. K., and Belady, C. 2006. Gravity-Assisted Air Mixing in Data Centers and How It Affects the Rack Cooling Effectiveness. ITherm 2006, San Diego, CA, May 30-June 2, 2006.

Herrlin, M. K. 2005. Rack Cooling Effectiveness in Data Centers and Telecom Central Offices: The Rack Cooling Index (RCI). ASHRAE Transactions, Vol. 111, Part 2. American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., Atlanta, GA.

Sharma, R. K., Bash, C. E., and Patel, C. D. 2002. Dimensionless Parameters for Evaluation of Thermal Design and Performance of Large-Scale Data Centers. American Institute of Aeronautics and Astronautics, AIAA-2002-3091.

Telcordia. 2002. Generic Requirements NEBS GR-63-CORE, NEBS Requirements: Physical Protection, Issue 2, April 2002. Telcordia Technologies, Inc., Piscataway, NJ.

Telcordia. 2001. Generic Requirements NEBS GR-3028-CORE, Thermal Management in Telecommunications Central Offices, Issue 1, December 2001. Telcordia Technologies, Inc., Piscataway, NJ.

VanGilder, J. W., and Shrivastava, S. K. 2007. Capture Index: An Airflow-Based Rack Cooling Performance Metric. ASHRAE Transactions, Vol. 113, Part 1, pp. 126-136. American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., Atlanta, GA.

DISCUSSION

Mike Scofield, President, CMS Co., Sebastopol, CA: What is the recommended cold-aisle relative humidity, and why?

Magnus K. Herrlin: My presentation did not touch on relative humidity but rather dry-bulb temperature.
Generally speaking, however, the acceptable equipment intake relative humidity range depends on the application. Too high a humidity may lead to hygroscopic dust failures, and too low a humidity may lead to electrostatic discharge failures.

Rack Cooling Index (RCI) is a Registered Trademark and Return Temperature Index (RTI) is a Trademark of ANCIS Incorporated (www.ancis.us).