Control of Computer Room Air Conditioning using Sensors in the IT Equipment
Energy Efficient Data Center Demonstration Project
About the Energy Efficient Data Center Demonstration Project
The project's goal is to identify key technology, policy and implementation experts and partners to engage in creating a series of demonstration projects that show emerging technologies and best available energy efficiency technologies and practices associated with operating, equipping and constructing data centers. The project aimed to identify demonstrations for each of the three main categories that impact data center energy utilization: operation & capital efficiency; equipment (server, storage & networking equipment); and data center design & construction (power distribution & transformation, cooling systems, configuration, energy sources, etc.). The project also identified member organizations that have retrofitted existing data centers and/or built new ones where some or all of these practices and technologies are being incorporated into their designs, construction and operations.

About The Silicon Valley Leadership Group (SVLG)
The SVLG comprises principal officers and senior managers of member companies who work with local, regional, state, and federal government officials to address major public policy issues affecting the economic health and quality of life in Silicon Valley. The Leadership Group's vision is to ensure the economic health and a high quality of life in Silicon Valley for its entire community by advocating for adequate affordable housing, comprehensive regional transportation, reliable energy, a quality K-12 and higher education system, a prepared workforce, a sustainable environment, affordable and available health care, and business and tax policies that keep California and Silicon Valley competitive.

Silicon Valley Leadership Group, 224 Airport Parkway, Suite 620, San Jose, CA · svlg.net · © 2009 Silicon Valley Leadership Group
Control of Computer Room Air Conditioning using Sensors in the IT Equipment

The goal of this demonstration was to show how sensors in IT equipment could be accessed and used to directly control computer room air conditioning. The data provided by the sensors is available on the IT network, and the challenge for this project was to connect this information to the computer room air handler's control system. A control strategy was developed to enable separate control of the chilled water flow and the fans in the computer room air handlers. By using these existing sensors in the IT equipment, an additional control system is eliminated (or could serve as a redundant backup) and optimal cooling can be provided, saving significant energy.

Intel hosted the demonstration in its Santa Clara, CA data center. Intel collaborated with IBM, HP, Emerson, Wunderlich-Malec Engineers, FieldServer Technologies, and LBNL to install the necessary components and develop the new control scheme. LBNL also validated the results of the demonstration.

Project Case
Data center cooling is usually provided with computer room air conditioner (CRAC) devices, which use direct expansion refrigeration cooling coils, or computer room air handler (CRAH) devices, which use chilled water coils. Typically, these devices use return air temperature sensors as the primary control variable to adjust the air temperature supplied to the data center. This control approach significantly limits energy efficiency because the return air is the worst location for maintaining temperatures at the inlet to IT equipment. Importantly, server manufacturers have agreed that their main operational parameter is the air temperature provided at the inlet of the server itself, not the proxy temperature returning to the cooling device. Therefore, a much higher degree of monitoring and control would be achieved by using front-panel (inlet) temperature sensor data.

Server front-panel, i.e., inlet air temperature is monitored and available through each server's IT manageability network, which supports the Simple Network Management Protocol (SNMP) or the Intelligent Platform Management Interface (IPMI). Energy waste would be reduced by getting server inlet air temperature from the server manageability network and linking it to the facilities management system to control the cooling system in the data center. Accordingly, this project demonstrates and validates the ability of modern servers to provide operating information from their IT manageability network to a building control system that subsequently determines operating setpoint(s) for cooling system operations.

The demonstration project's two primary goals were to:
• Demonstrate the ability to provide operating information from IT servers to the building control system
• Demonstrate the ability to provide set point changes from the building control system to the data center conditioning systems
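The inlet temperature data described above is typically exposed by each server's baseboard management controller. As a rough illustration of how such a reading could be pulled from a single server, the sketch below uses the open-source ipmitool utility over an IPMI-over-LAN session; the host address, credentials, and the "Inlet Temp" sensor name are illustrative assumptions (the exact sensor name varies by server vendor), and this is not the data-collection code used in the demonstration.

import subprocess

def read_inlet_temp(host: str, user: str, password: str,
                    sensor: str = "Inlet Temp") -> float | None:
    """Return the inlet air temperature in degrees C, or None if not found."""
    cmd = [
        "ipmitool", "-I", "lanplus",            # IPMI-over-LAN session
        "-H", host, "-U", user, "-P", password,
        "sdr", "type", "temperature",           # list all temperature sensors
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        fields = [f.strip() for f in line.split("|")]
        # Typical row: "Inlet Temp | 04h | ok | 7.1 | 23 degrees C"
        if fields and fields[0] == sensor and len(fields) >= 5:
            try:
                return float(fields[4].split()[0])   # "23 degrees C" -> 23.0
            except ValueError:
                return None
    return None

if __name__ == "__main__":
    temp = read_inlet_temp("192.0.2.10", "admin", "secret")  # placeholder BMC
    print(f"Server inlet temperature: {temp} °C")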
The potential for energy reductions from using the servers' on-board temperature sensors is clear, since the most common method is control via the return air temperature to the conditioning unit. This existing approach uses a blended temperature regime that will over- and under-anticipate the cooling needed within the server itself.

Project Outcome
The primary goal for the demonstration project's proof of concept was achieved. Effective communications and closed-loop control were developed from the Intel data center servers to the Intel Facility Management System (FMS) to provide control through a programmable logic controller (PLC) with a proportional-integral-derivative (PID) control routine to adjust setpoints for supply air temperature and fan volume flow in an Emerson Electric (Liebert) CRAH unit, without significant interruption or reconfiguration of the devices.

[Chart 1: Server Temperature Control by Cooling Device]
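To make the closed-loop arrangement concrete, the following is a simplified sketch of the kind of external PID routine described above: it trims the CRAH supply-air temperature setpoint so that the hottest reported server inlet stays near a target. It is written in Python for illustration only; in the demonstration the routine ran in the GE Fanuc PLC, and the target, gains, and limits shown here are assumed values, not the project's tuning.

class PID:
    """Minimal discrete PID controller (illustrative, not the PLC logic)."""
    def __init__(self, kp, ki, kd, out_min, out_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.out_min, min(self.out_max, out))   # clamp the trim

TARGET_INLET_C = 25.0          # assumed server inlet target
BASE_SUPPLY_SETPOINT_C = 18.0  # assumed nominal CRAH supply-air setpoint

pid = PID(kp=0.8, ki=0.05, kd=0.0, out_min=-5.0, out_max=5.0)

def new_supply_setpoint(inlet_temps_c, dt_s):
    """Compute a supply-air setpoint from the hottest server inlet reading."""
    hottest = max(inlet_temps_c)        # the worst-case server drives the loop
    error = TARGET_INLET_C - hottest    # hotter than target -> negative error
    trim = pid.update(error, dt_s)      # negative trim -> colder supply air
    return BASE_SUPPLY_SETPOINT_C + trim

In the demonstration, the resulting setpoints for supply air temperature and fan volume flow were handed to the CRAH unit through the FMS, as described under Goal 2 below.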
Goal 1: An existing Intel data repository system, known as SPNet, was used for acquiring server temperature and power data. The SPNet data application system queries the servers through the Intelligent Platform Management Interface (IPMI). Therefore, the needed control points were already being collected, and SPNet could be used for moving large amounts of data to the facilities management system (FMS) server in real time. Data were presented using Intel's FMS Cimplicity Human-Machine Interface (HMI) software and stored within the system for input to the data center cooling system devices. See Chart 1; Legend, Server - Result (green line).

Goal 2: Communication with the data center's cooling system employed the existing Facility Management System, which is a GE Fanuc Series PLC system that provides process control and a GE Fanuc Cimplicity HMI system used for operator monitoring, alarms, and trending. The interface between the existing GE PLC and the Liebert CRAH unit was accomplished using a FieldServer Technologies FS-B2010 bridge, allowing direct communications from the PLC Ethernet port to the FieldServer Modbus port. The Cimplicity FMS local script capability was used to develop graphics that allowed facility operators to choose which server temperature data points were used for control. See Chart 1: Notes.

Next Steps
This demonstration project's primary goals proved that the onboard server temperature sensors can provide usable data for controlling a data center's cooling devices. It is clear that energy efficiency improvements can be realized by using the actual server temperature sensors to modulate CRAH device operation rather than a surrogate temperature. However, a detailed energy reduction analysis was not completed due to a variety of complicating factors.

An optimized control scheme was not achievable. Control of the CRAH unit's cooling valve was implemented by providing the valve with a new set point whenever the external PID control scheme calculated a new flow value, effectively fooling it into modulating flow. The control logic within the CRAH unit could not be bypassed, so it continually interrupted this external control scheme. Higher inlet air temperatures to the servers reduce chilled water use in a data center by providing both chiller-energy and pumping-energy savings, which can be in the range of 20 to 30%.

[Figure 1: Data Center Cooling Configuration]
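For a sense of what the setpoint hand-off through the FieldServer bridge might look like in code, here is a minimal sketch that writes a supply-air setpoint to a Modbus holding register over TCP. It assumes the pymodbus library (3.x); the bridge address, register number, and x10 scaling are hypothetical placeholders, since the real register map depends on the FieldServer/Liebert configuration, and in the demonstration the writes originated from the GE PLC rather than from Python.

from pymodbus.client import ModbusTcpClient

BRIDGE_IP = "192.0.2.50"         # FieldServer FS-B2010 bridge (placeholder)
SUPPLY_SETPOINT_REGISTER = 100   # hypothetical holding register

def write_supply_setpoint(setpoint_c: float) -> bool:
    """Push a new CRAH supply-air setpoint through the Modbus bridge."""
    client = ModbusTcpClient(BRIDGE_IP)
    if not client.connect():
        return False
    try:
        # Many controllers expect temperatures as integers scaled by 10,
        # e.g. 18.5 degC is sent as 185 (assumed scaling).
        value = int(round(setpoint_c * 10))
        result = client.write_register(SUPPLY_SETPOINT_REGISTER, value)
        return not result.isError()
    finally:
        client.close()

if __name__ == "__main__":
    print("setpoint accepted:", write_supply_setpoint(18.5))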
Fan speed in the CRAH unit was limited to no lower than 60% of full speed. The external PID control scheme tried to lower the fan's speed without result, again because of the CRAH unit's onboard control scheme. Energy reductions achievable with fan variable speed drives (VSDs) are well documented and could provide reductions of 30% and greater.

[Figure 2: Data Center Control Diagram]

Finally, due to the configuration of the data center area, it was not feasible to isolate a CRAH unit so that it provided dedicated cooling to only the servers used for control input. When optimized control schemes are implemented in data centers, energy reductions in the range of 30 to 40% can be realized.
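For context on the fan figures above, the fan affinity laws say that fan power falls roughly with the cube of speed, which is why variable speed drives yield such large savings and why the 60% minimum-speed floor mattered. The arithmetic below is an idealized estimate, not a measurement from this demonstration.

def fan_power_fraction(speed_fraction: float) -> float:
    """Ideal affinity-law estimate: fan power scales with the cube of speed."""
    return speed_fraction ** 3

for speed in (1.0, 0.8, 0.6):
    print(f"{speed:.0%} speed -> roughly {fan_power_fraction(speed):.0%} of full fan power")

# At the 60% floor the fans would ideally draw only about 22% of full power;
# real-world drive and motor losses make actual savings smaller, consistent
# with the well-documented reductions of 30% and greater cited above.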
Silicon Valley Leadership Group, 224 Airport Parkway, Suite 620, San Jose, CA · svlg.net