Second Place: Industrial Facilities or Processes, Existing

Data Center Dilemma

By Jeff Sloan, P.E., Member ASHRAE

Photo: The TCS data center makes use of white cabinets and reflective surfaces as one method to reduce power for its lights and cooling.

TeleCommunications Systems (TCS) faced a difficult choice in late 2009. Its research and production IT systems had expanded to fill 3,000 ft² (279 m²) of developed data center space in Seattle's World Trade Center Building (2401 Elliott Avenue), and it was still growing. TCS wanted to create a larger data center in the same building, but the building didn't seem to have adequate remaining power in its electrical substation, and it lacked space for the additional generators and chillers that a conventional data center would require.

About the Author: Jeff Sloan, P.E., is a design manager at McKinstry Co., in Seattle. He is a member of ASHRAE's Puget Sound chapter.
2013 ASHRAE Technology Award Case Studies

This article was published in ASHRAE Journal, March 2013. Copyright 2013 ASHRAE. Posted at www.ashrae.org. This article may not be copied and/or distributed electronically or in paper form without permission of ASHRAE. For more information about ASHRAE Journal, visit www.ashrae.org.

TCS's older data center systems required almost as much electrical power for the cooling equipment and for critical power processing as was needed for the computers themselves. Data center designers characterize a given data center's power usage effectiveness as its PUE ratio, defined as the incoming (total) power wattage divided by the useful (process) power wattage. A plan to accommodate 400 kW of ac and dc equipment with a 1.8 PUE (similar to the existing data center's) would require 720 kW of incoming power. That much capacity wasn't available within the building's electrical distribution, and additional generator space would be needed.

TCS had been tolerating other problems in its existing data center space; poor air distribution caused some of its equipment to overheat unless the cooling system was adjusted to maintain low temperatures. The poor air distribution in the older data center can be seen in the thermograph in Figure 1, where some equipment is drawing in air that is warmer than the room setpoint, despite the refrigerated supply air coming from the visible ceiling outlet. In this picture, the equipment is mounted in open racks and aisles that present many opportunities for cooling air to recirculate; warm air can leave some servers and then be pulled into other servers without returning to the HVAC equipment first. The cold supply air temperatures and high humidity setpoints in the older data center were causing some HVAC equipment to humidify, and other HVAC equipment to simultaneously dehumidify, wasting both energy and water.
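The sizing constraint follows directly from the PUE definition; a minimal sketch, using only the figures quoted above:

```python
def incoming_power_kw(it_load_kw: float, pue: float) -> float:
    """Total facility power implied by a given IT load and PUE.
    PUE = incoming (total) power / useful (process) power."""
    return it_load_kw * pue

# 400 kW of IT equipment at the old data center's ~1.8 PUE:
print(incoming_power_kw(400, 1.8))   # prints 720.0 (kW), beyond the building's capacity
```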
Design

The design proposed to overcome these difficulties was styled after a successful data center project that had been built in Moses Lake, Wash., in 2006 by the same design-build team. The Moses Lake project has reliably operated without refrigeration through summer temperatures of more than 100°F (38°C) because its design solved the cooling air distribution problem shown in Figure 1, and by doing so achieved a year-round PUE of 1.20. A design with a PUE that low would allow TCS to install the desired 400 kW of IT equipment without modifications to the building's electrical service or the need for a larger generator.

Photo 1: Seattle's World Trade Center building (center), showing proximity to saltwater (lower left) and train tracks.

Once this concept was explained to TCS, they chose to place their servers and other IT equipment in closed chimney cabinets (Figure 2) instead of in open racks. The chimney cabinets were placed in a 15 ft (5 m) clear-height space with a 10.5 ft (3.2 m) high suspended T-bar ceiling forming a return air plenum above the ceiling. Chimney-style cabinets are available from a variety of manufacturers and are designed to convey all the warm air produced by the equipment they contain directly into the return air plenum, without opportunities for recirculation. The spaces in the chimney cabinets that are not occupied by equipment should be blanked off so cooling air can only enter each cabinet through a server. The thermograph in Figure 2 shows how uniform air temperatures arrive at each of the (dark) installed servers, despite the (warm) blanks filling the remainder of the cabinets. Without hot air recirculation, the HVAC equipment serving the new TCS data center space only needs to make 75°F (24°C) supply air, a condition that can be produced year-round in the Pacific Northwest without refrigeration, using direct evaporative cooling only.
By using power only for fans and not a chiller, the HVAC system requires only 10% of the generator's capacity. With this chillerless cooling system, along with efficient UPS and dc power processing equipment, TCS's new data center operates with a PUE of only 1.15, year-round. With a PUE that low, the project qualified for utility conservation incentives that helped to offset the cost of the more expensive chimney cabinets.

TCS now enjoys a very low HVAC operating cost compared to other potential data center HVAC systems. For example, a typical similarly sized Seattle installation of chilled water computer room air-handling (CRAH) units with a water economizer in an open-aisle configuration would cost $349,414 per year to operate. TCS spends only $309,807, with the savings coming mostly from HVAC electricity costs. This cooling design requires no power and no piping to heat-rejection equipment outdoors or on the roof, and without any heat-rejection equipment the noise impact on neighbors is very low.

Building at a Glance: TCS Data Center
Location: 2401 Elliott Ave., Seattle
Owner: TeleCommunications Systems, Inc.
Principal use: Software support for emergency first-responders and mission-critical enterprises
Includes: ac and dc telecommunications equipment, UPS power, standby generator, efficient cooling
Employees/Occupants: The data center is routinely unoccupied. Two staff operators maintain equipment.
Gross square footage: 4,360
Conditioned space: 3,360 ft²
Substantial completion/occupancy: December 2010
Capacity: Facility is operating at just over 50% of design power capacity

Figure 1a (left): Older data center with open racks and aisles. Figure 1b (right): Thermograph of older data center shows hot air reentering servers despite cold surroundings.

Figure 2a (left): New data center shows partially populated chimney cabinets. Figure 2b (right): Thermograph of new data center shows cool air entering servers despite warm surroundings.

Innovation, Energy Savings

The mechanical system consists of three custom indoor air handlers and four relief fans. These quantities include a redundant supply air handler and a redundant relief fan for concurrent maintainability. Because the cooling system installed for TCS must always pull in outdoor air to cool the data center, the design needed to overcome some air quality challenges. The building is close to Seattle's saltwater coastline, and the project's outdoor air intakes are directly above railroad tracks where freight trains frequently queue to enter a narrow tunnel under the city. Effective filtration (MERV 8 and MERV 13) was provided in the air handlers to exclude salt and diesel particles from the data center.

A pre-action fire protection system was installed to balance the risks presented to the computer equipment by fire and by water from overhead fire sprinklers; the pre-action valve is triggered by an early-warning aspirating smoke detection system. However, TCS needed to devise a method to ensure the system could differentiate harmless filtered locomotive smoke from the dangerous smoke of a potential fire within the data center space.
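The savings implied by those two operating costs work out as follows (a simple check using only the numbers quoted above):

```python
# Annual operating cost comparison from the article.
baseline_cost = 349_414   # $/yr, chilled water CRAH units with water economizer
tcs_cost = 309_807        # $/yr, TCS chimney-cabinet evaporative design

savings = baseline_cost - tcs_cost
percent = 100 * savings / baseline_cost
print(f"Annual savings: ${savings:,} ({percent:.1f}%)")
# prints: Annual savings: $39,607 (11.3%)
```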
This system includes an outdoor reference detector to measure the concentration of smoke drawn through the outdoor air intakes and compare its particle size distribution with any smoke detected by the indoor detectors. The outdoor reference head was installed early (during construction) to give it time to learn what particle size distribution corresponded to typical locomotive smoke.

The International Mechanical Code requires many air-handling systems to be equipped with automatic smoke shutoffs. This system qualifies for an exception to that requirement because it serves only one room and cannot spread smoke into surrounding spaces or into the building's exit corridors. In this way, the system operates reliably to cool the equipment regardless of outdoor air conditions.

Figure 3: One air handler, one cabinet and one relief fan, showing how cooling supply air is kept separated from warm relief air. (Diagram elements: outside air and 95°F return air mix in the AHU mixing section, pass through filtration, direct evaporative media and direct-drive fans, and leave the air handler as 75°F humidified supply air; server cabinets discharge 95°F air out of their chimneys to the ceiling return plenum and the four relief fans.)

The TCS data center cooling system uses variable volume supply and relief fans (Figure 3). The supply fans track the IT equipment's air consumption (the chimney airflow) precisely by comparing the static pressure below the ceiling, among the cabinets, to an outdoor pressure reference. If the computer fans change their airflow, or the amount of installed equipment changes, the supply fans change their speed to match the airflow traveling up the cabinet chimneys in real time. Whenever the air handlers modulate their mixing dampers to control the supply air temperature, the relief fans must change their airflow to match the incoming outdoor airflow. This fluctuation in fan power is visible in the recorded trend (blue) of Figure 4. To make this happen automatically, the relief fan airflow is simply varied to maintain the differential pressure across the ceiling at a near-zero value. The result is that each server in the room has a negligible, and never adverse, airflow resistance to overcome, just as it would if it were located in an open rack. The relief fans are staged as their speed is varied, to keep the fans and motors at their most efficient point of operation.

Through experimentation, the air-handling unit supply fan operation has been optimized. Figure 4 shows system power use as commissioned, with a PUE of 1.22 while the room was approaching 50% populated. Initially, only two air handlers operated, and the redundant air handler was turned off. TCS then began operating with all three air handlers' fans turning, but at a slower speed than before.
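A minimal sketch of how such pressure-based fan tracking might be implemented follows. The setpoints, gains and sign conventions here are hypothetical; the article does not publish the actual control sequence, and a real building automation sequence would add integral action, staging and alarm limits.

```python
def supply_fan_speed(room_dp_pa: float, speed: float, gain: float = 0.02) -> float:
    """Trim supply fan speed (0..1) to hold the room-to-outdoor static
    pressure near zero. room_dp_pa = room minus outdoor reference, Pa.
    A negative value (room under-pressurized, chimneys pulling more air
    than is supplied) speeds the fans up. Hypothetical gain."""
    return min(1.0, max(0.0, speed - gain * room_dp_pa))

def relief_fan_speed(ceiling_dp_pa: float, speed: float, gain: float = 0.02) -> float:
    """Trim relief fan speed (0..1) to hold the differential pressure
    across the suspended ceiling near zero, so servers see negligible
    external airflow resistance. ceiling_dp_pa = plenum minus room, Pa."""
    return min(1.0, max(0.0, speed + gain * ceiling_dp_pa))
```

Each loop would run at a fixed interval on the control system, one per operating fan group.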
This lowered total fan power by reducing the filter face velocity and brought the PUE down to 1.17, but the fans were rotating at only 25 Hz. In October 2012, we blanked off one-third of the fans in each air handler (while still operating all three air handlers) to bring the remaining fans' speed up to a more efficient 40 Hz, and the PUE dropped further, to 1.15 (Figure 5). This manual method of staging the supply fans is a reasonable way to pursue efficiency in a data center because the supply airflow requirement changes slowly, along with the populated state of the room. Some manufacturers of custom air handlers can perform this internal fan staging automatically. With each adjustment, we have made sure enough fan capacity remains to allow the active fans to speed up should an air handler failure require the redundant capacity to operate.

Figure 4: TCS actual power trends before the described fan optimization, showing IT power, power to fans and outside air temperature (September 2011).

Designers of data center cooling systems should carefully consider the potential for the data center to operate at less-than-fully-populated conditions for indeterminate durations. Many data centers operate lightly populated, and the equipment heat can increase and decrease with little warning. Figure 6 shows how the annual operating costs of the compared cooling systems would vary across the range of population states. Washington State has a fairly stringent energy code, so a code-compliant cooling system for TCS would need to perform as efficiently as a system with a full air economizer, adiabatic humidifier and constant volume fans.
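The 40 Hz figure is consistent with simple fan scaling: if total airflow stays constant while one-third of the fans are blanked off, each remaining fan must move 1.5 times its previous airflow, so (for a fan on a fixed system curve) its speed rises in the same proportion. A quick check under ideal fan laws:

```python
# New fan frequency after blanking one-third of the fans, assuming
# total airflow is unchanged and per-fan flow scales linearly with speed.
old_hz = 25.0
fans_remaining_fraction = 2.0 / 3.0

new_hz = old_hz / fans_remaining_fraction   # each remaining fan moves 1.5x the flow
print(f"{new_hz:.1f} Hz")                   # prints 37.5 Hz, close to the observed 40 Hz
```

Note that the ideal fan laws alone do not predict the energy savings; those presumably come from the fans, motors and drives operating much nearer their efficient range at 40 Hz than at 25 Hz, as the article describes.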
The data in Figure 6 show how (for this year-round cooling application) an adiabatic humidifier (orange line) outperforms an electric steam-generating humidifier (red line). The graph also shows how the annual cost of the mechanical system described here (green line) would change without the separation of cold and warm air provided by the chimney cabinets (orange line). Significantly, the consistent distance between the green and orange curves shows how the system described here provides energy savings through VAV fan airflow reduction when the room is lightly populated, and through refrigeration avoidance when the room is more fully populated (compared to the stringent Washington State energy code baseline). This assured-savings characteristic removed the uncertainty about how populated the system might actually turn out to be, and helped us make the case for the conservation incentive with our electric utility.

Different methods may be used to modulate evaporative media. We considered staged control (either parallel or series banks of separately wetted media) before determining that the face-and-bypass method would provide the steadiest conditions in the space and consume the least water; the staged method can frequently overshoot the setpoint and put too much moisture into the room.

The face-and-bypass dampers modulate to prop up the indoor humidity in the winter and to cool the discharge air in the summer. The humidity setpoint is reset seasonally. A low relative humidity avoids structural condensation and saves water in the wintertime, but a higher relative humidity is necessary in the summer, when evaporative cooling of 100% outdoor air occurs. When no water evaporation is necessary at all, the face and bypass dampers are opened to save fan power and, after a time delay, the water is drained from the evaporative sump until evaporation is required again. The system uses a nonchemical water treatment method and provides conductivity control of blow-down within each supply air handler.
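The seasonal reset described above might be sketched as follows. The specific setpoint values and month boundaries are hypothetical, chosen only to fall within the 25% to 70% RH cold-aisle range the article cites; the actual reset schedule is not published.

```python
def humidity_setpoint_rh(month: int) -> float:
    """Seasonal RH setpoint reset (hypothetical values): low in winter
    to avoid structural condensation and save water, high in summer
    when evaporative cooling of 100% outdoor air must do the work.
    Month boundaries are illustrative, not from the article."""
    winter_months = {11, 12, 1, 2, 3}
    return 30.0 if month in winter_months else 60.0
```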
Figure 5: TCS actual power observations before and after the described fan optimization, plotting HVAC power (supply plus relief fans) against cooling load (power out of the IT equipment). Before 10/12, prior to fan optimization: PUE of 1.22 (after accounting for UPS power). After 10/12, after fan optimization: PUE of 1.15 (after accounting for UPS power).

Figure 6: Annual calculated cost comparison at various populated percentages (Seattle 400 kW computer room with 77°F, 25% to 70% RH cold aisles). Costs include water, electricity and maintenance. Systems compared: open aisles and chilled water CRAH units with no economizer (not permitted by Seattle energy code); open aisles and chilled water CRAH units with water economizer (permitted by Seattle energy code); open aisles and chilled water air handlers with air economizer and steam humidifier (not permitted by Seattle energy code); open aisles and air handlers with adiabatic humidifier and air economizer (Seattle energy code baseline); and chimney cabinets and air handlers with adiabatic humidifier and air economizer (used at TCS).

Even if a data center chooses to use a water economizer, the separation of hot and cold air can reduce the chiller demand and annual refrigeration energy use significantly. Additionally, this separation and airflow modulation prevents hot spots and provides fault-tolerant air distribution in the room, and lightly loaded data centers can reduce their fan speed and energy use significantly.

Conclusion

With a design wet-bulb temperature below 70°F (21°C), the Pacific Northwest offers many opportunities for data centers built like TCS's to enjoy a low PUE without any refrigeration.
However, anywhere a data center is constructed, or can be retrofit, with a means of separating the cool supply air from the warm return air (as TCS was able to do using chimney cabinets), there will be energy savings and a reduced need for refrigeration.