Energy Efficiency in the Data Center



Energy Efficiency in the Data Center
Maria Sansigre and Jaume Salom
Energy Efficiency Area, Thermal Energy and Building Performance Group
Technical Report: IREC-TR-00001

Title: Energy Efficiency in the Data Center
Subtitle: State of the art
Authors: Maria Sansigre and Jaume Salom
Date: July 2011
Reference: IREC-TR
IREC Project ref.: CPDs
Contract No.: n/a
Notes:


0. Abstract

Sustainable and efficient design, dwindling fuel reserves, global warming, responsible energy use, and operating costs are becoming critical issues for society. These aspects are increasingly important in data centers for reasons such as the following: a data center uses a substantial amount of energy (it can be 100 times the watts per square meter of an office building), and its operation 24 hours a day, 7 days a week, amounts to about three times the annual operating hours of other commercial properties. The intent of this document is to give the reader a general view of data center facilities, with references to detailed literature on efficient design and operation, in order to minimize life-cycle cost, maximize the energy efficiency of the facility, and advance sustainable building design and operation. The best-practice scenario represents the efficiency gains that can be obtained through the extensive adoption of the practices and technologies used in the most energy-efficient facilities in operation today. The state-of-the-art scenario identifies the maximum energy savings that could be achieved using available technologies.

1. Introduction

A data center (or data centre, DC) is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies and cooling systems, redundant data communications connections, environmental controls, and security devices. During the boom of the microcomputer industry in the 1980s, computers started to be deployed everywhere, in many cases with little or no care for operating requirements. As information technology (IT) operations grew in complexity during the 1990s, companies became aware of the need to control IT resources. This new equipment (servers) started to find its place in the old computer rooms. The use of the term "data center," as applied to specially designed computer rooms, started to gain popular recognition about this time. The boom of data centers came during the dot-com bubble. Companies needed fast Internet connectivity and nonstop operation to deploy systems and establish a presence on the Internet. Installing such equipment was not viable for many smaller companies, so many companies started building very large facilities, called Internet data centers (IDCs), which provide businesses with a range of solutions for systems deployment and operation. New technologies and practices were designed to handle the scale and the operational requirements of such large-scale operations. These practices eventually migrated toward private data centers and were adopted largely because of their practical results. As of 2007, data center design, construction, and operation is a well-known discipline. A recent study found that IT is responsible for about 2% of global greenhouse gas emissions [1], about as much as the aviation industry.
Furthermore, this contribution is projected to double in the coming years. Increasing environmental concern and regulatory action are challenging how IT solutions are designed and managed across their lifecycles. Data centers are a prominent component of this impact because of their rapid growth. On the other hand, data centers are today found in nearly every sector of the economy, from businesses to governmental and non-governmental organizations, and are essential to modern society, especially since the world's economy has shifted its information management from paper to electronic form. The data center industry is experiencing a major growth period, stimulated by an increase in demand for data processing and storage. Some reasons are listed below:
- Financial services are seeing increased use of electronic transactions, such as online banking and electronic trading on the stock market.
- There is a growth of Internet services such as music downloads and online communications.
- There has been a growth of global commerce and services.
- The adoption of satellite navigation.

- Electronic tracking in the transportation industry.
- The transition of record keeping from paper to electronic form: medical records are saved in databases, businesses now save employee records electronically, and companies are often required by law to hold records electronically for a set number of years.

Data centers have become common and critical to the functioning of businesses, because business operations run on the equipment (hardware) housed in them.

2. Data Center description

2.1 Overview

IT operations are a crucial aspect of most organizational procedures. One of the main concerns is business continuity: companies rely on their information systems to run their operations, and if a system becomes unavailable, company operations may be impaired or stopped completely. A data center houses the electronic devices responsible for running IT operations (called IT equipment). This equipment is placed in cabinets known as racks. These racks are distributed in an orderly way in a room (IT room), sometimes called white space [2], a term that refers to the usable raised-floor area in square meters or square feet.

Figure 1: Heat dissipation process in the Data Center. Source: IREC

Servers and related IT equipment generate a considerable amount of heat in a small area during their operation. Furthermore, IT equipment is highly sensitive to temperature and humidity fluctuations, so a data center must maintain tight power and cooling conditions to assure the integrity and functionality of its hosted equipment.

A data center must provide a reliable infrastructure for IT operations, in order to minimize any chance of disruption. Four American institutions have developed data center standards to design and support the required conditions:
- TIA (Telecommunications Industry Association): TIA-942 Standard
- The Uptime Institute: Data Center Site Infrastructure Tier Standard: Topology
- ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers): Data Center Design and Operation series
- NFPA (National Fire Protection Association): NFPA 75, Standard for the Protection of Electronic Computer/Data Processing Equipment

The TIA-942 Standard specifies the minimum requirements for telecommunications infrastructure applicable to data centers of any size [2]. For the operational parameters and the HVAC facilities and requirements, ASHRAE has published the Data Center Design and Operation series [3-10]. Regarding reliability and liability requirements, the Uptime Institute gives a data center classification [11]. To give a global understanding of the data center, the next sections describe the general aspects of DC infrastructure, topology, and requirements, with reference to these standards.

2.2 DC Structure

The first institution to develop the computer room structure was the Telecommunications Industry Association, as the IT equipment is mainly developed by the information technology (IT) industry. Consequently, the main aspects given by the TIA are those related to the cabling system and equipment distribution for communications needs. The standard enables the DC design to be considered early in the building design. The data center requires spaces dedicated to supporting the telecommunications infrastructure; these spaces shall be dedicated to supporting telecommunications cabling and equipment.
Typical spaces found within a computer or IT room generally include the entrance room, main distribution area (MDA), horizontal distribution area (HDA), zone distribution area (ZDA), and equipment distribution area (EDA). Depending on the size of the data center, not all of these spaces may be used. These spaces should be planned to provide for growth and transition to evolving technologies, and they may or may not be walled off or otherwise separated from the other computer room spaces. The next table summarizes the DC space structure given by the TIA in three main categories: computer or IT room(s), telecommunications room, and data center support areas.

TIA-942 Data Center Space Structure:
- Cabling and networking: main distribution area (MDA), horizontal distribution area (HDA), zone distribution area (ZDA), equipment distribution areas (EDA).
- Computer/IT room (servers, storage): racks and cabinets; "hot" and "cold" aisles; equipment placement; placement relative to floor tile grid; access floor tile cuts; installation of racks on access floors; specifications (clearances, ventilation, rack height, depth, and width, rack and cabinet finishes).
- Telecommunications room.
- Data center support areas: UPS, cooling, switchboards, generator.

Figure 2: Data Center Space Structure. Source: TIA 942

This space description gives the key to understanding the data center equipment distribution.

The IT room layout may consider two main spaces: the cabling and networking area and the IT equipment area. On the other hand, the data center infrastructure must include the additional space and equipment required to support data center operations, including power transformers, uninterruptible power supplies (UPS), generators, computer room air conditioners (CRACs), chillers, air distribution systems, etc. The next figure shows the distribution proposed by the TIA for the data center.

Figure 3: Distribution Areas in the DC. Source: TIA

On the other hand, The Green Grid has recently described another space division based on the main physical infrastructure areas [12]. All power and cooling physical infrastructure is located in at least one of three zones, categorized as follows and shown in the next figure:
- Zone 1: inside the building and inside the physical data center (IT room): IT equipment, power distribution units.
- Zone 2: inside the building but outside the IT room: UPS, generator, CRACs.
- Zone 3: outside the building: chillers, cooling tower, storage tank.

Figure 4: Example of zones categorization in the DC. Source: TIA 942

This classification is made according to power and cooling distribution areas.

2.3 DC Requirements

The IT room is an environmentally controlled space that houses equipment and cabling directly related to the computer and other telecommunications systems. The TIA-942 focuses on the data center cabling infrastructure, redundancy, and design, and outlines the other room requirements by referencing other specific standards. Other organizations establish the requirements for environmental conditions (ASHRAE) and fire protection in computer rooms (NFPA 75). The TIA-942 standard groups the IT or computer room requirements into the following broad categories:
- Location/access
- Architectural design: size, guidelines for other equipment, ceiling heights, lighting, doors, floor loading/raised floor, seismic considerations
- Environmental design (ASHRAE): operational parameters [3], contaminants, vibration [8], HVAC
- Electrical design: power (UPS), emergency power, bonding and grounding
- Fire protection (NFPA 75)
- Water infiltration

Figure 5: Data Center Requirements. Source: TIA 942

To understand the room requirements, the first step is to check the specific conditions for the IT equipment given by each manufacturer in the product datasheet.

Figure 6: Sun Oracle Storage Server Physical specifications. Source: Sun Microsystems

The product datasheet classifies the equipment specifications in four main groups:
- Equipment dimensions
- Environment (temperature, cooling, and airflow)
- Power
- Emissions

The equipment dimensions and quantity are needed to select the kind of rack to house it, and to check the required IT room area and floor load capacity for the rack. The room environment condition is understood as the air temperature and relative humidity necessary for the correct operation of the equipment. In addition, the datasheet usually gives the equipment heat dissipation in BTU/h (British thermal units per hour; 1 BTU/h is approximately 0.293 W) and the airflow needed to remove the heat generated, in CFM (cubic feet per minute).
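The datasheet figures above can be cross-checked with a simple energy balance. The following sketch converts an equipment power draw to heat dissipation in BTU/h and estimates the airflow needed to carry that heat away; it assumes the common rule of thumb that every electrical watt becomes a thermal watt, standard air properties, and a hypothetical 10 K air temperature rise across the equipment (the 500 W server is also an invented example, not a value from this report).

```python
# Sketch: estimate heat load and required airflow for a piece of IT equipment.
# Assumes 1 W electrical ~ 1 W thermal and the sensible-heat relation for air.

RHO_AIR = 1.2         # kg/m^3, air density at ~20 degC
CP_AIR = 1005.0       # J/(kg.K), specific heat of air
W_TO_BTUH = 3.412     # 1 W = 3.412 BTU/h
M3S_TO_CFM = 2118.88  # 1 m^3/s = 2118.88 cubic feet per minute

def heat_btuh(power_w: float) -> float:
    """Heat dissipation in BTU/h, assuming all electrical power becomes heat."""
    return power_w * W_TO_BTUH

def airflow_cfm(power_w: float, delta_t_k: float = 10.0) -> float:
    """Airflow needed to remove the heat with an air temperature rise of delta_t_k."""
    m3_per_s = power_w / (RHO_AIR * CP_AIR * delta_t_k)
    return m3_per_s * M3S_TO_CFM

server_w = 500.0  # hypothetical server drawing 500 W
print(round(heat_btuh(server_w)))    # ~1706 BTU/h
print(round(airflow_cfm(server_w)))  # ~88 CFM at a 10 K rise
```

Halving the allowed temperature rise doubles the required airflow, which is why the inlet/outlet conditions on the datasheet matter as much as the raw power figure.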

A general rule for estimating the heat dissipation of equipment is that for every watt of power consumed, a watt of heat is generated. ASHRAE provides environmental conditions for electronic equipment and facility operation in the Thermal Guidelines for Data Processing Environments [3]. Four environmental classes are described:
- Class 1: typically a data center with tightly controlled environmental parameters (dew point, temperature, and relative humidity) and mission-critical operations; products typically designed for this environment are enterprise servers and storage products.
- Class 2: typically an information technology space, office, or lab environment with some control of environmental parameters (dew point, temperature, and relative humidity); products typically designed for this environment are small servers, storage products, personal computers, and workstations.
- Class 3: typically an office, home, or transportable environment with little control of environmental parameters (temperature only); products typically designed for this environment are personal computers, workstations, laptops, and printers.
- Class 4: typically a point-of-sale or light industrial or factory environment with weather protection, sufficient winter heating, and ventilation.

The IT equipment housed in the DC generally corresponds to Classes 1 and 2. The next table covers both equipment operation and equipment power-off. The operating environmental conditions, including allowable and recommended values, refer to the state of the air entering the electronic equipment (inlet conditions). The conditions for Classes 1 through 4 are the result of consensus among the many environmental specifications of IT equipment manufacturers.

Figure 7: Equipment environment specifications. Source: ASHRAE

The datasheet must also give the airflow protocol of the equipment. Depending on this aspect, the inlet surface (where cold air enters the equipment) and the outlet surface (where the generated heat exits) can vary, as shown in the next picture. The airflow configuration determines the cooling design and operation inside the room and the layout of the IT equipment.

Figure 8: Airflow Patterns. Source: ASHRAE

Another example of a cooling-requirements datasheet is the model proposed by ASHRAE in [3].

Figure 9: IT equipment airflow requirements datasheet. Source: ASHRAE

The IT equipment uses power to run, and this power demand must be known when implementing the electrical system. The equipment manufacturers (IBM, HP, Sun, Dell, etc.) always give the peak values for power consumption and heat dissipation, so the cooling and electrical facilities must be designed considering these peaks. Usually these values correspond to 1 or 2 U (rack units) of equipment. For example, a conventional rack can hold up to 42 U, so for a 1 U server using 1 kW of power, a rack at full capacity can demand 42 kW of power and require 42 kW of cooling. Another important parameter provided in the datasheet is the electromagnetic interference (EMI) on the communication network. EMI is interference emitted from an electrical device that has an adverse effect on the function of a surrounding or connected device. In the data center there is a significant quantity of electronic devices requiring both power and communication. The adverse effects of EMI on data communication can be mitigated by following industry-accepted standards and best practices for grounding, cable routing, and separation. The Federal Communications Commission (FCC) of the United States defines acceptable limits for radiated and conducted emissions; to ensure electromagnetic compatibility between devices, only FCC Class A compliant devices should be deployed in the data center. The IT room should meet NFPA 75 (Standard for the Protection of Electronic Computer/Data Processing Equipment), which outlines requirements for computer installations needing fire protection and special building construction, rooms, areas, or operating environments [13]. The floor layout should be consistent with equipment and facility providers' requirements, for example:
- Floor loading requirements including equipment, cables, patch cords, and media (static concentrated load, static uniform floor load, dynamic rolling load).
- Service clearance requirements (clearance on each side of the equipment required for adequate servicing).
- Airflow requirements.
- Mounting requirements.
- DC power requirements and circuit length restrictions.
- Equipment connectivity length requirements (for example, maximum channel lengths to peripherals and consoles).
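The rack-level arithmetic described above (42 U at 1 kW per U) can be sketched as follows; the helper function and its names are illustrative, not part of any standard.

```python
# Sketch of the rack-level peak power estimate from the text: units installed
# times rated power per unit, capped at the rack's 42 U capacity. Because every
# electrical watt becomes heat, the cooling demand equals the power demand.

def rack_power_kw(units_used: int, power_per_unit_kw: float,
                  rack_capacity_u: int = 42) -> float:
    """Peak electrical (and thermal) demand of one rack, in kW."""
    if units_used > rack_capacity_u:
        raise ValueError("rack overfilled")
    return units_used * power_per_unit_kw

print(rack_power_kw(42, 1.0))  # 42.0 kW of power, and 42 kW of heat to remove
```

In practice measured loads sit well below these datasheet peaks, but the electrical and cooling plant still has to be sized for them.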

2.4 DC Design Process

The steps in the design process described below apply to the design of a new data center or the expansion of an existing one. In either case it is essential that the design of the telecommunications cabling system, equipment floor plan, electrical plans, architectural plan, HVAC, security, and lighting systems be coordinated. Ideally, the process should be:
a) Estimate the IT equipment space, power, and cooling requirements of the data center at full capacity. Anticipate future IT equipment power and cooling trends (expected DC growth) over the lifetime of the data center.
b) Provide space, power, cooling, security, floor loading, grounding, electrical protection, and other facility requirements to architects and engineers. Provide requirements for the operations center, loading dock, storage room, staging areas, and other support areas.
c) Coordinate preliminary data center space plans from the architect and engineers. Suggest changes as required.
d) Create an equipment floor plan including placement of major rooms and spaces for entrance rooms, main distribution areas, horizontal distribution areas, zone distribution areas, and equipment distribution areas. Provide expected power, cooling, and floor loading requirements for equipment to the engineers. Provide requirements for telecommunications pathways.
e) Obtain an updated plan from the engineers with telecommunications pathways, electrical equipment, and mechanical equipment added to the data center floor plan at full capacity.
f) Design the telecommunications cabling system based on the needs of the equipment to be located in the data center.

2.5 DC Classification

A DC is a facility that operates 24 hours a day, 365 days a year. Since the critical operations of a company run on it, protection against electrical and general system failure is a key design consideration.
Reliability and continuous operation are the main aspects to be considered during DC design, and they are achieved through redundancy. Redundancy is understood as the system feature of having reserve components to provide support in case one component fails, ensuring the continuity of the service that the system performs. N is the number of elements needed to satisfy normal operating conditions. Redundancy is often expressed relative to this baseline of N, with N + 1, N + 2, 2N, and 2(N + 1) as examples of redundancy levels. Depending on the levels of availability and redundancy of the data center facility infrastructure, the Uptime Institute defines four Tier ratings in its Data Center Site Infrastructure Tier Standard [11]. This standard describes criteria to differentiate four classifications of site infrastructure topology based on increasing levels of redundant capacity components and distribution paths, and it focuses on the definitions of the four Tiers and the performance confirmation tests for determining compliance. The next figure summarizes the main aspects of the Tier Standard. The design and redundancy improvements yield a higher site availability.
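The redundancy levels named above translate directly into installed component counts for a given baseline N. The mapping below is a sketch of that bookkeeping; the function and the example of 3 chillers are illustrative only.

```python
# Installed units for each redundancy scheme, given N units needed in normal
# operation (N+1/N+2 add spares; 2N mirrors the capacity; 2(N+1) mirrors two
# systems that each carry a spare).

def installed_units(n: int, scheme: str) -> int:
    """Total units installed for a given redundancy scheme."""
    schemes = {
        "N": n,
        "N+1": n + 1,
        "N+2": n + 2,
        "2N": 2 * n,
        "2(N+1)": 2 * (n + 1),
    }
    return schemes[scheme]

# e.g. a site needing 3 chillers in normal operation:
for s in ("N", "N+1", "2N", "2(N+1)"):
    print(s, installed_units(3, s))  # 3, 4, 6, 8
```

The jump from N + 1 to 2N is where capital cost roughly doubles, which is why the Tier a site targets is a business decision as much as an engineering one.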

Availability is a percentage value representing the degree to which a system or component is operational and accessible when required for use. Availability is composed of two variables: mean time between failures (MTBF) and mean time to repair (MTTR). MTBF is a basic measure of a system's reliability; a failure typically means the loss of critical processing equipment. MTTR is the expected time to recover a system from a failure, and it is equally important because the time to recover can be much greater than the time for service personnel to respond. Together, these two factors attempt to quantify the expected availability of a critical system or, in other words, its expected uptime. The formula below illustrates how both MTBF and MTTR impact the overall availability of a system: as MTBF goes up, availability goes up; as MTTR goes up, availability goes down.

Availability = MTBF / (MTBF + MTTR)

Figure 10: Uptime Institute Availability Calculation

Figure 11: Scheme of Tier Level Infrastructure Requirements. Source: Uptime Institute

The next sections explain the main aspects of each Tier level.

Tier I: Basic Site Infrastructure

The fundamental requirement:
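The availability formula above can be sketched numerically. The MTBF and MTTR values in the example are hypothetical, chosen only to show the order of magnitude involved.

```python
# Steady-state availability from the formula Availability = MTBF / (MTBF + MTTR).

def availability(mtbf_h: float, mttr_h: float) -> float:
    """Fraction of time the system is expected to be up."""
    return mtbf_h / (mtbf_h + mttr_h)

# e.g. one failure every 50,000 h, repaired in 4 h:
a = availability(50_000, 4)
print(f"{a:.6f}")                      # 0.999920
print(round((1 - a) * 8760, 1))        # expected downtime per year: ~0.7 h
```

Note how the result is dominated by MTTR once MTBF is large: halving the repair time halves the expected annual downtime, even with no change in reliability.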

a) A Tier I basic data center has non-redundant capacity components and a single, non-redundant distribution path serving the computer equipment.
The performance confirmation tests:
a) There is sufficient capacity to meet the needs of the site.
b) Planned work will require most or all of the site infrastructure systems to be shut down, affecting computer equipment, systems, and end users.
The operational impacts:
a) The site is susceptible to disruption from both planned and unplanned activities. Operational (human) errors on site infrastructure components will cause a data center disruption.
b) An unplanned outage or failure of any capacity system, capacity component, or distribution element will impact the computer equipment.
c) The site infrastructure must be completely shut down on an annual basis to safely perform necessary preventive maintenance and repair work. Urgent situations may require more frequent shutdowns. Failure to regularly perform maintenance significantly increases the risk of unplanned disruption as well as the severity of the consequential failure.

Tier II: Redundant Site Infrastructure Capacity Components

The fundamental requirement:
a) A Tier II data center has redundant capacity components and a single, non-redundant distribution path serving the computer equipment.
The performance confirmation tests:
a) Redundant capacity components can be removed from service on a planned basis without causing any of the computer equipment to be shut down.
b) Removing distribution paths from service for maintenance or other activity requires shutdown of computer equipment.
The operational impacts:

a) The site is susceptible to disruption from both planned activities and unplanned events. Operational (human) errors on site infrastructure components may cause a data center disruption.
b) An unplanned capacity component failure may impact the computer equipment. An unplanned outage or failure of any capacity system or distribution element will impact the computer equipment.
c) The site infrastructure must be completely shut down on an annual basis to safely perform preventive maintenance and repair work. Urgent situations may require more frequent shutdowns. Failure to regularly perform maintenance significantly increases the risk of unplanned disruption as well as the severity of the consequential failure.

Tier III: Concurrently Maintainable Site Infrastructure

The fundamental requirements:
a) A Concurrently Maintainable data center has redundant capacity components and multiple independent distribution paths serving the computer equipment. Only one distribution path is required to serve the computer equipment at any time.
b) All IT equipment is dual powered and installed properly to be compatible with the topology of the site's architecture. Transfer devices, such as point-of-use switches, must be incorporated for computer equipment that does not meet this specification.
The performance confirmation tests:
a) Each and every capacity component and element in the distribution paths can be removed from service on a planned basis without impacting any of the computer equipment.
b) There is sufficient permanently installed capacity to meet the needs of the site when redundant components are removed from service for any reason.
The operational impacts:
a) The site is susceptible to disruption from unplanned activities. Operational errors on site infrastructure components may cause a computer disruption.
b) An unplanned outage or failure of any capacity system will impact the computer equipment.
c) An unplanned outage or failure of a capacity component or distribution element may impact the computer equipment.

d) Planned site infrastructure maintenance can be performed by using the redundant capacity components and distribution paths to safely work on the remaining equipment.
e) During maintenance activities, the risk of disruption may be elevated.

Tier IV: Fault Tolerant Site Infrastructure

The fundamental requirements:
a) A Fault Tolerant data center has multiple, independent, physically isolated systems that provide redundant capacity components and multiple, independent, diverse, active distribution paths simultaneously serving the computer equipment. The redundant capacity components and diverse distribution paths shall be configured such that N capacity is providing power and cooling to the computer equipment after any infrastructure failure.
b) All IT equipment is dual powered and installed properly to be compatible with the topology of the site's architecture. Transfer devices, such as point-of-use switches, must be incorporated for computer equipment that does not meet this specification.
c) Complementary systems and distribution paths must be physically isolated from one another (compartmentalized) to prevent any single event from simultaneously impacting both systems or distribution paths.
d) Continuous cooling is required.
The performance confirmation tests:
a) A single failure of any capacity system, capacity component, or distribution element will not impact the computer equipment.
b) The system itself automatically responds to a failure to prevent further impact to the site.
c) Each and every capacity component and element in the distribution paths can be removed from service on a planned basis without impacting any of the computer equipment.
d) There is sufficient capacity to meet the needs of the site when redundant components or distribution paths are removed from service for any reason.
The operational impacts:
a) The site is not susceptible to disruption from a single unplanned event.

b) The site is not susceptible to disruption from any planned work activities.
c) Site infrastructure maintenance can be performed by using the redundant capacity components and distribution paths to safely work on the remaining equipment.
d) During maintenance activity where redundant capacity components or a distribution path are shut down, the computer equipment is exposed to an increased risk of disruption in the event a failure occurs on the remaining path. This maintenance configuration does not defeat the Tier rating achieved in normal operations.
e) Operation of the fire alarm, fire suppression, or the emergency power off (EPO) feature may cause a data center disruption.

A summary of the preceding requirements defining the four distinct Tier classification levels is given in the next table.

Figure 12: Summary Tier Levels. Source: Uptime Institute

3. Data Center Energy consumption

A data centre uses 10 to 100 times more energy per square meter than a typical office building [14]. In fact, European data center electricity consumption in 2007 was 56 TWh, and the estimate for 2020 rises to 104 TWh. More precisely, in Spain a data center built in 2010 can demand 40 times the electricity of an office building. Data center consumption is partially due to the density of equipment in it (data centers are not designed for humans but for computers, and as a result typically have minimal circulation of fresh air and no windows) and to the fact that the data center runs continuously. The data center electricity consumption belongs mainly to two groups of load:
- IT load: the consumption of the IT equipment in the data center; it can be described as the IT work capacity delivered for a given IT power consumption. It is also important to consider the utilization of that capacity as part of the efficiency of the data center.
- Facilities load: the mechanical and electrical systems that support the IT electrical load, such as cooling systems (chiller plant, fans, pumps), air conditioning units, UPS, PDUs, etc.

Figure 13: Data Center electrical consumption breakdown. Source: IBM

Figure 13 shows the general data center electricity consumption demand in Europe: the green color represents the facilities load contribution and the blue one the IT load contribution. The power consumption belonging to the IT equipment usually represents 45-55% of the total in today's data centers, approximately the same power needed by the infrastructure to cover the IT equipment requirements.
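The 45-55% IT share quoted above fixes how much facility power is spent per watt delivered to the IT equipment. The sketch below derives that overhead ratio; the function name is ours, and the shares are the ones from the text.

```python
# Facility overhead per watt of IT power, given the IT fraction of total
# consumption. An IT share of 50% means one facility watt per IT watt.

def overhead_per_it_watt(it_share: float) -> float:
    """Facility watts spent per IT watt, given the IT fraction of total power."""
    return (1.0 - it_share) / it_share

for share in (0.45, 0.50, 0.55):
    print(f"IT share {share:.0%}: "
          f"{overhead_per_it_watt(share):.2f} W overhead per IT watt")
```

Read the other way, pushing the IT share from 45% to 55% cuts the infrastructure overhead from about 1.22 W to about 0.82 W per IT watt, which is the efficiency lever the rest of this report is concerned with.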

Figure 14: Data Centre average Power allocation. Source: ASHRAE

Figure 15: Historical data center energy consumption with future energy use projections. Source: EPA

The facilities load can be broken down into two principal groups: the cooling system and the electrical and building systems.

As shown in the previous figures, the principal energy use belongs to the IT equipment's consumption. Energy efficiency in electronics and IT equipment is out of the scope of this document; that task belongs to the hardware manufacturers, developers, and end users. The second main group of consumption in a data center is the cooling system. To better understand the breakdown of data center power use, the first aspect to analyze is the IT load and its nature, depending on the sort of equipment that constitutes it. The IT equipment requirements will determine the power of the infrastructure needed and, to a degree, the facilities load (provided the facilities design is well done) (see section 4.1.1). When a data center is designed, the first data needed are the sort of IT equipment that will be housed in the data center and its power demand. Estimating the power use of IT equipment is not easy: the power use of electronic equipment varies with hardware configuration and class, usage, and environmental conditions. The power supplies for these devices are sized for the maximum loads expected when the server is fully configured, so the actual measured loads observed in typical installations are much lower than the rated power of the power supply [15]. IT equipment technology is advancing at a rapid pace, resulting in relatively short product cycles and an increased frequency of equipment upgrades. Since the IT facilities that house this equipment, along with their associated infrastructure, are typically conceived to have longer life cycles, any modern IT facility design needs the ability to continuously accommodate the multiple IT equipment deployments that it will experience during its lifetime.
Based on the latest information from all the leading IT equipment manufacturers, ASHRAE has published Datacom Equipment Power Trends and Cooling Applications [4], which provides new and expanded IT equipment power trend charts to allow the DC facilities designer to more accurately predict the IT equipment loads that the facility can expect to house in the future, as well as ways of applying the trend information to DC facility designs today. The next figure shows the trend of the heat load per IT equipment type and per year. As explained in the previous section, the heat dissipation in the IT equipment is directly proportional to its electrical consumption. This chart helps in estimating the DC power for the future needs of the facility.

Figure 16: Datacom Power Trends. Source: ASHRAE
When considering the whole facility, or even just an IT room within it, semiconductors and chips seem like tiny elements of little importance or relevance. However, semiconductors and chips have a major impact on the load of an IT facility and are a critical source for predicting the loads, especially future loads. Since chips are the primary components used in IT equipment, chip trends can be considered an early indicator of future trends in that equipment. The power trend of this chart has been created from the trends at the chip level given by Moore's Law.
Figure 17: Moore's Law. Source: ASHRAE

Moore's Law describes a long-term trend in the history of computing hardware: the number of transistors that can be placed inexpensively on an integrated circuit has doubled approximately every 18 months. The trend has continued for more than half a century and is not expected to stop until 2015 or later. Consequently, the increase in transistors raises the chip frequency, which increases the heat dissipation.
Figure 18: Heat Dissipation in Transistors. Source: ASHRAE
The processor is the primary source of heat generation within a piece of electronic equipment, with surface temperatures rising to greater than 100 ºC. The processor typically has some means of integral cooling to transport the heat away from the chip surface. The air is channeled within the server to transport the heat generated by the server components through convection, before the server fans exhaust the warmer air back out to the surrounding environment. Provisioning is a main aspect to take into account while the DC is being designed. Provisioning refers to planning and allocating resources (financial, spatial, etc.) to accommodate changes that may be required in the future. Provisioning could result in spatial considerations, such as providing additional floor area to accommodate the delivery and installation of additional equipment, or it could have a more direct impact on the current design, such as oversizing distribution infrastructure (e.g., pipe sizes, ductwork, etc.) to be able to handle future capacity. As IT equipment power needs grow to provide more processing capacity, the new challenge for hardware manufacturers (IBM, HP, Dell, Sun...) is to deploy new systems that have more processing capacity with the same power demand. This is the key to deploying the sustainable data center.
Conversely, the challenge for engineers is to build the facility as sustainably as possible. For this, it is necessary to give the IT equipment exactly what it demands, that is, to produce essentially the electricity and cooling needed, as efficiently and reliably as possible.

That is why energy monitoring systems are crucial to implement in the data center facility. The consumption of the IT equipment varies with the operations it supports, so, depending on the use, two data centers with the same equipment and infrastructure can present different electric and thermal loads and consumptions. The Best Practice Scenario is the first to show a drop in data center energy consumption in 2011 compared to 2006 levels. This scenario includes energy-saving measures such as moderate consolidation of data centers, aggressive adoption of energy-efficient servers, and use of improved fans, chillers, and free cooling. The result reaches just under 40 billion kilowatt-hours of energy consumption. The State of the Art Scenario demonstrates the most drastic data center energy consumption reduction. This scenario includes all changes within the Best Practice Scenario and adds power management applications, liquid cooling, and combined heat and power. The result reaches just over kilowatt-hours of energy consumption [16]. Many data center owners and designers are simply not aware of the financial, environmental and infrastructure benefits to be gained from improving the energy efficiency of their facilities. Because of this, in December 2006, President George W. Bush signed a bill that required the U.S. EPA (Environmental Protection Agency) to study the growth and energy consumption of data centers, with the intention of determining best practices not only for data centre energy efficiency, but also for cost efficiency [17]. Since that time, the Lawrence Berkeley National Laboratory has developed a list of 67 best practices for optimal data centre design [18]. They are broken up into the following categories: Mechanical, IT Equipment, Electrical Infrastructure, Lighting, and Commissioning and Retro-commissioning [19].
On the other hand, in November 2009 the European Union published the Code of Conduct on Data Centres Energy Efficiency [20], in response to increasing energy consumption in data centers and the need to reduce the related environmental, economic and energy supply security impacts. This Code of Conduct proposes general principles and practical actions to be followed by all parties involved in data centers operating in the EU, to result in more efficient and economic use of energy, without jeopardising the reliability and operational continuity of the services provided by data centers. A typical rack of new servers draws approximately 5-45 kilowatts of power alone, and a data centre houses hundreds of these racks [21]. Running all day every day, one 20-kilowatt rack alone uses over kWh annually; that is, on average, over twenty-five times the amount of energy that a European resident uses in a year [22]. According to the EPA, a 10% energy saving by all US data centers would save 10, kWh annually, enough energy to power one million US households [14].
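The scale of continuous operation can be checked with simple arithmetic. The sketch below computes the annual energy of a rack drawing a constant 20 kW around the clock, as the text describes; the rack power is the document's example figure.

```python
# Annual energy of a continuously operating 20 kW rack (arithmetic check).
rack_power_kw = 20.0
hours_per_year = 24 * 365  # 8760 h: running all day, every day

annual_kwh = rack_power_kw * hours_per_year
print(annual_kwh)  # 175200.0 kWh per year
```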

4. Data Center Energy Efficient Facilities

4.1 DC Facilities Overview

The IT equipment requirements give the key for the dimensioning of the data center facilities. As mentioned in section 2, the main requirements for the IT equipment to operate correctly are:
- Power to run.
- Cold air to remove the heat generated by the equipment during its operation.
- A safe and secure environment.
- Continuous operation: 24 hours x 7 days/week.
- Data and communication supply.
These requirements are achieved through the following facilities:
- Power: Electrical System.
- Cold air: Cooling System.
- Safe and secure environment: Fire Protection System, Water Detection System, Access Control.
- Continuous operation: Uninterruptible Power Supply, Emergency Generation System, equipment redundancy.
- Cabling and Networking System.
Another main aspect of the data center infrastructure is the architectural requirements, which imply special considerations: the raised floor, walls, doors, technical areas...
The intent of the present document is the understanding of the IT infrastructure requirements from the point of view of energy efficiency in the physical infrastructure, not in the IT equipment. Section 3 describes the data center power consumption breakdown. As shown there, the main groups of power demand in the physical infrastructure are the electrical and cooling facilities. The cabling and networking systems are out of the scope of this document; these systems are explained in detail in TIA-942 [2]. The main objective, then, is the description of the requirements and the facilities needed to optimize the energy consumption. As a consequence, the safety and security facilities, such as access control, fire protection and water detection, are just mentioned and referenced. In summary, the data center facilities within the scope of this section are:
- Cooling System
- Electrical System

4.2 Mechanical. HVAC

One of the most important parts of the data center infrastructure is the HVAC facilities. The next figure illustrates an example of a data center cooling system, whose chain runs from cold water production (chiller plant, cooling tower or dry cooler), through water distribution (piping, pumps, collectors), to cold air production in computer room air conditioning (CRAC) units, and finally to room air distribution (raised floor plenum and perforated tiles) serving the heat load in the racks.
Figure 19: Example of a conventional cooling system for a Data Center. Source: IREC
The cooling system can be divided into three main parts, located in different places of the building:
- Cold water production and distribution, situated outside the data center in Zone 3 (building roof) [12].
- Cold air production, situated in Zone 2 (technical room outside the IT Room) or in Zone 1 (IT Room) [12].
- Air distribution and delivery to the heat load, situated in Zone 1 (IT Room) [12].
The efficiency and effectiveness of a data center cooling system is heavily influenced by the path, temperature and quantity of the cooling air delivered to the IT equipment and of the waste hot air removed from the equipment. From the point of view of energy efficiency in the facility, the greatest potential savings lie in the cooling system, because of the complexity of the load (see section 3). Depending on the data center location and load density, there are multiple open challenges in deploying new concepts for cooling the IT equipment.

HVAC Load Considerations

A data center houses different classes of equipment. Each sort of equipment has a range of power consumption and, as a consequence, a different amount of heat dissipation. This aspect justifies why a data center usually represents a heterogeneous heat load environment. The next figure is based on the ASHRAE power trend chart and a reference rack footprint of 0.65 m². The chart shows the heat dissipation by sort of IT equipment:
Figure 20: Heat Load trends per rack by sort of equipment. Source: ASHRAE

Depending on the amount of heat dissipated per rack, there is a classification in terms of heat density:
1. Normal density: < 20 kW/rack
2. High density: > 20 kW/rack and < 35 kW/rack
3. Extreme density: > 35 kW/rack
Each kind of equipment presents a heat load. The sum of the heat loads of all the racks housed in the IT room gives the cooling demand of the room; dividing this value by the IT room floor area gives the heat density (W/m²):

Q_room [W] = Σ q_i [W]
Heat density [W/m²] = Q_room [W] / A [m²]

where q_i [W] is the heat dissipated per IT equipment item, A [m²] is the IT room area and Q_room [W] is the IT room cooling demand.
Figure 21: Total dissipation in IT Room. Source: IREC
Figure 22: IT Rooms Heat density. Source: IREC
The sort of equipment housed in the IT room, then, will determine the size of the infrastructure needed. For example, a computer room usually has higher power and cooling requirements than a bank data center, due to the nature of the IT equipment, consisting basically of compute blade servers and high-density communication hardware (high and extreme density equipment). On the other hand, a bank data center will have stronger requirements in terms of reliability, with higher tier levels in design and operation (see Section 2). These last aspects will determine the basis for the conception of the facilities.
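The cooling demand and heat density calculation described above can be sketched in a few lines. The rack loads and room area below are illustrative values, not figures from the report; the density classification follows the per-rack thresholds given in the text.

```python
# Sketch: IT room cooling demand and heat density, per the text's definitions.
# Rack loads and room area are illustrative values.

rack_loads_w = [5_000, 12_000, 22_000, 8_000]  # heat dissipated per rack [W]
room_area_m2 = 60.0                            # IT room floor area [m2]

cooling_demand_w = sum(rack_loads_w)                 # room cooling demand [W]
heat_density_w_m2 = cooling_demand_w / room_area_m2  # heat density [W/m2]


def density_class(kw_per_rack):
    """Classify a rack by heat dissipated, using the text's thresholds."""
    if kw_per_rack < 20:
        return "normal"
    elif kw_per_rack <= 35:
        return "high"
    return "extreme"


print(cooling_demand_w)    # 47000 W
print(heat_density_w_m2)   # ~783 W/m2
print(density_class(22))   # high
```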

Airflow Management

The thermal environment of data centers plays a significant role in the energy efficiency and the reliability of data center operation. As is known, it is necessary to remove the heat dissipated by the IT equipment during its operation. The heat removed from a rack by the airflow is represented by this expression [5]:

Q [kW] = ρ · q · c_p · ΔT

where Q [kW] is the heat to be removed by the cooling system air, ρ [kg/m³] is the air density (constant), q [m³/s] is the cold airflow through the rack, c_p [kJ/(kg·K)] is the air specific heat at constant pressure (constant), and ΔT [K] is the mean temperature rise of the air as it passes through the rack.
Figure 23: IT equipment heat dissipation removal
For a constant heat load there are two parameters that can vary: the airflow and the delta T. The first data center cooling systems consisted of supplying cold air directly to the IT room ambient. This way of cooling presents airflow inefficiencies around the equipment that force the supply of more air than necessary. The next improvement needed is to manage the airflow efficiently in order to achieve the following objectives:
- Eliminate mixing and recirculation around the IT equipment.
- Maximize return air temperature by supplying air directly to the loads.
- Supply cooling air directly at the IT equipment air intake location.
The achievement of these aspects begins with the installation of Computer Room Air Conditioning (CRAC) units supplying air to an underfloor plenum. A perforated tile is placed near the rack, so the supply point is nearer than in the previous configuration. The underfloor plenum's function was no longer only cabling and electric line allocation; it started to act as a plenum that homogenizes the air pressure from the CRACs to the supply point.
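The heat removal expression can be rearranged to size the airflow for a given rack load and delta T. The sketch below solves for q; the 20 kW load and 12 K delta T are illustrative values, while the air properties are standard approximate constants.

```python
# Required cold airflow for a given rack heat load, from Q = rho * q * cp * dT.
# Load and delta T below are illustrative; air properties are approximate.

RHO = 1.2   # air density [kg/m3]
CP = 1.005  # specific heat of air at constant pressure [kJ/(kg*K)]


def required_airflow_m3s(heat_kw, delta_t_k):
    """Solve Q = rho * q * cp * dT for the volumetric airflow q [m3/s]."""
    return heat_kw / (RHO * CP * delta_t_k)


q = required_airflow_m3s(20.0, 12.0)
print(round(q, 2))  # 1.38 m3/s
```

Note the trade-off the text describes: for a fixed heat load, halving the achievable delta T doubles the airflow the cooling system must move.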

Figure 24: IT Room cooling system principles. Source: ASHRAE
Taking this way of cooling the IT equipment as the basis of the IT room cooling system, there are several strategies to improve the air management and, ultimately, the energy efficiency of the system [5], [18], [19]:
- Hot aisle/cold aisle configuration: arranging the IT equipment in this layout, the cold air flows through the perforated tiles placed in the cold corridor in front of each rack and enters it. The heat removed from each rack comes out through its rear door and returns to the cooling system, flowing through the hot corridor until it arrives at the CRAC units.
Figure 25: Hot/Cold Aisle layout. Source: Rittal
- Flexible strip curtains: use flexible strip curtains to improve the separation by blocking the open space above the racks.
Figure 26: Flexible strip curtains on cold aisle. Source: Rittal
- Blank unused rack positions: standard IT equipment racks exhaust hot air out the back and draw cooling air in the front. Openings that form holes through the rack should be blocked in some manner to prevent hot air from being pulled forward and recirculated back into the IT equipment.

Figure 27: Blanking panels of different dimensions. Source: Rittal
Figure 28: Blanking panels benefits. Source: Rittal
- Design for the IT airflow configuration: some IT equipment does not have a front-to-back cooling airflow configuration. Configure racks to ensure that equipment with side-to-side, top-discharge, or other airflow configurations rejects heat away from other equipment's air intakes.
- Use appropriate diffusers: diffusers should be selected that deliver air directly to the IT equipment, without regard for the draft or throw concerns that dominate the design of most office-based diffusers. There are several solutions for this; the most popular is to use perforated tiles.
Figure 29: Perforated tile for data center raised floors. Source: TROX

- Position supplies and returns to minimize mixing and short-circuiting: diffusers should be located to deliver air directly to the IT equipment. At a minimum, diffusers should not be placed such that they direct air at rack or equipment heat exhausts; rather, they should direct air only towards where the IT equipment draws in cooling air. Supplies and floor tiles should be located only where there is load, to prevent short-circuiting of cooling air directly to the returns; in particular, do not place perforated floor supply tiles near computer room air conditioning units, where the supply air would short-circuit directly into the return air path.
Figure 30: Undesired effects caused by air recirculation. Source: Rittal
- Minimize air leaks in raised floor systems: in systems that use a raised floor as a supply plenum, minimize air leaks through cable accesses in hot aisles, where supply air is essentially wasted. Also implement, through policy or design, control of supply tile placement to ensure that supply tiles are not placed in areas without appropriate load and/or near the return of the cooling system, where cooling air would short-circuit and, again, be wasted.
Figure 31: Air leaks in raised floor systems. Source: Rittal
When the CRAC units are placed inside the room, the optimal position is perpendicular to the hot aisles (to avoid air short-circuiting due to CRAC proximity to the first perforated tiles of each row). The reason for this is that, as the supply is done through the underfloor plenum to improve the

pressure and the airflow distribution, the return of hot air to the CRAC units is facilitated if they are aligned with the hot corridors, where the hot air flows from the racks into the IT room ambient (see Section 4.1.3). Whenever possible, place the CRAC units in an adjacent room (called the cooling or technical corridor) lined up along the common wall of the IT room. Depending on the IT room size and density loads, it may be necessary to have two cooling corridors (see Section 4.1.3).
- Provide an adequately sized return plenum or ceiling height: overhead return plenums need to be sized to allow for the large quantities of airflow that are required.
- Provide an adequately sized supply: underfloor supply plenums need to be sized to allow for the large quantities of airflow that are required. Common obstructions such as piping, cabling trays, or electrical conduits need to be accounted for when calculating the plenum space required. Blockages can cause high pressure drops and uneven flow, resulting in hot spots in areas where cooling air is short-circuiting to the return path.
Figure 32: Examples of cabling trays placed in underfloor plenums. Source: IBM
With all these strategies, the airflow management inside the room is improved. The latest trends in data center cooling system design, as ASHRAE recommends [10], are based on providing higher inlet temperatures to the IT equipment in order to be more efficient. Many data centers today present air inlet temperatures around 14-15 ºC. To be more energy efficient, the inlet temperature to the IT room can be set between 18 and 27 ºC [10, 23]. On the other hand, as this new temperature range was being established, a dominant problem began to appear as the heat dissipation of new IT equipment started to grow, driven by its higher power demands. Associated with this increase is the recirculation of hot air from the rack outlets to their inlets, causing the appearance of hot spots and an uneven inlet temperature distribution. The next pictures show what occurs:

Figure 33: Hot air recirculation in racks. Source: IBM
Figure 34: Hot air recirculation. Source: 42U
The pictures show the consequences of the combination of two aspects:
1. Raising the inlet temperature (18-27 ºC) increases the amount of heat recirculation, because the air density decreases with temperature, making it easier for air to recirculate from the back of the rack to the front.
2. The CRAC system delivers the cold air with a limited maximum delta T (Section ). This limitation on the delta T value forces more airflow to be supplied to the room as the thermal load grows (see the heat removal equation above). The increased airflow needed to compensate for the thermal load causes disturbances in the underfloor plenum air pressure, raising the air velocity through the tiles so that cold air sometimes bypasses the server inlets. (In a study done by the Uptime Institute, 59% of the cold air was bypassing the server inlets [24].)
The hot spots cause inefficient cooling of the equipment situated in the racks' upper positions, which can interrupt its operation due to the higher inlet temperatures. This last aspect justifies data center managers' efforts to improve cooling techniques. To minimize this heat recirculation around the equipment there are a few techniques:

- Use an appropriate pressure in underfloor supply plenums: a pressure that is too high will result in both higher fan costs and greater leakage and short-circuiting of cooling air. A pressure that is too low can result in hot spots in the areas most distant from the cooling supply point, and in poor-efficiency 'fixes' such as lowering the supply air temperature or overcooling the full space just to address the hot spots.
- Rigid enclosures: build rigid enclosures to fully separate the heat rejected from the rear of the IT equipment from the cold air intakes at the front. The basic scheme of the effects achieved through containment is shown in the next figure:
Figure 35: Consequences of air containment. Source: IREC
In this approach there are several possibilities:
o Cold aisle enclosure: with this solution, all the cold air that arrives at the corridor enters the racks. The improvement in efficiency comes from an inlet airflow to the rack with a more uniform temperature and flow than in the previous configuration. In addition, the enclosure physically stops the air recirculation from the back of the racks, so hot spots cannot appear.
Figure 36: Cold aisle enclosure
o Hot aisle containment: in this configuration the hot air is enclosed and conducted to the CRAC units, so higher return air temperatures are reached. Higher return temperatures allow greater savings from economization and lower fan volume

requirements; the higher the delta T between supply and return, the greater the possible reduction in fan power.
Figure 37: Hot aisle enclosure with air suction. Source: 42U
Figure 38: Exhaust containment chimney
The airflow management best practice scenario results in the implementation and development of a few cooling techniques, depending on the rack thermal load (a function of the IT equipment application and hardware configuration) [25]. (See section .)

Air Handler Systems

It may be desirable for HVAC systems serving IT equipment facilities to be independent of other systems in the building, although cross-connection with other systems may be desirable for backup. Redundant air-handling equipment is frequently used, normally with automatic operation. A complete air-handling system should provide ventilation air, air filtration, cooling and dehumidification, humidification, and heating. Data center cooling systems should be independent of other systems and may be required all year round, depending on the design. IT rooms can be conditioned with a wide variety of systems, including packaged computer room air-conditioning (CRAC) units and central station air-handling systems. Air-handling and refrigeration equipment may be located either inside or outside IT rooms.

CRAC Units

Computer room air-conditioning (CRAC) units are the most popular cooling solution for IT rooms.
Figure 39: Typical CRAC/CRAH unit with downflow and top air suction. Source: ASHRAE
CRAC units are specifically designed for IT equipment room applications and should be built and tested in accordance with the requirements of the latest revision of ANSI/ASHRAE Standard 127, Method of Testing for Rating Computer and Data Processing Room Unitary Air-Conditioners [26]. CRAC units are available in several types of cooling system configurations, including chilled water, direct expansion air-cooled, direct expansion water-cooled, and direct expansion glycol-cooled.
Direct expansion CRAC units: direct expansion (DX) units typically have multiple refrigerant compressors with separate refrigeration circuits, air filters, humidifiers, and integrated control systems with remote monitoring panels and interfaces. Reheat coils are an option. CRAC units may also be equipped with propylene glycol precooling coils and associated drycoolers to permit water-side economizer operation where weather conditions make this strategy economical [4]. Air-cooled direct expansion units extract heat from the room and transfer it to the outside air using air-cooled refrigerant heat exchangers (condensers). Once installed, the room unit and external condenser form an autonomous sealed circuit. The remote condensers used with DX CRAC units include precise electronic fan-speed condensing pressure control to ensure trouble-free operation of the unit throughout the year under a very wide range of external air temperatures. CRAC units with chilled-water coils may also be fitted with DX coils connected to remote outdoor air-cooled condensing units as redundant cooling sources.

Figure 40: Air-cooled direct expansion unit. Source: Uniflair
Chilled-water CRAH (Computer Room Air Handler) units: CRAC units utilizing chilled water for cooling do not contain refrigeration equipment within their packaging and generally require less servicing, can be more efficient, provide smaller room temperature variations, and more readily support heat recovery strategies than equivalent DX equipment. Chilled-water CRAC units use the availability of chilled water to control room conditions. This version of the CRAC has a relatively simple construction and outstanding reliability. Careful sizing of the heat exchanger coils yields a high sensible-to-total cooling ratio under most operating conditions at the appropriate chilled water temperatures.

Figure 41: Chilled-water CRAH unit. Source: Uniflair
Dual-cool CRAC units: when the data center site has a chilled-water source, but with a configuration that does not ensure the required level of security and continuous operation, this kind of CRAC unit is a good option. It is fitted with two completely independent cooling circuits:
- Chilled water
- Air-cooled or water-cooled direct expansion
In this case, priority is given to the chilled-water circuit, with the microprocessor control automatically starting direct expansion operation if the chilled-water supply fails or if the water is not cold enough to dissipate the entire heat load. Alternatively, the unit controls can be set to prioritize direct expansion cooling, activating chilled-water operation only in the event of a compressor malfunction. These kinds of CRAC units therefore provide a very high level of security, ensuring continuous system operation at all times, with the flexibility to manage the cooling resources in the best way for the particular installation.

Figure 42: Twin-coil CRAC unit. Source: Uniflair
From the point of view of cooling system energy efficiency, the first step to improve efficiency is to choose a chilled-water based system for the data center and avoid direct expansion as much as possible [18], [19].

CRAC Location

Depending on the data center size and typology, CRAC units are placed inside the IT room or remotely located and ducted to the conditioned space (see figures below). Whether they are remote or not, their temperature and humidity sensors should be located so as to properly control the inlet air conditions to the IT equipment within specified tolerances. Analysis of airflow patterns within the IT equipment room with advanced techniques such as computational fluid dynamics (CFD) may be required to optimally locate the IT equipment, the CRAC units, and the corresponding temperature and humidity sensors. Otherwise, it is possible that the sensors end up in a location that is not conditioned by the CRAC unit they control, or in a location that is not optimal, thereby forcing the cooling system to expend more energy than required.
Figure 43: CRAC unit location inside the IT Room

Figure 44: CRAC unit location in technical corridors beside the IT Room

Humidity Control

The types of humidifiers available within CRAC units include steam, infrared, and ultrasonic. Thought should be given to the maintenance and reliability of humidifiers. It may be beneficial to relocate all humidification to a dedicated central system. Another consideration is that certain humidification methods, or the use of improperly treated makeup water, are more likely to carry fine particulates into the space. Reheat is sometimes used in dehumidification mode, when the air is overcooled for the purpose of removing moisture: sensible heat is introduced to supplement the actual load in the space, typically by use of electric, hot water, or steam coils upon a call for reheat. Use of the waste heat of compression (hot gas) for reheat may also be available as an energy-saving option. IT facilities should be enclosed with a vapor barrier for humidity control.

Ventilation

In systems using CRAC units, it is necessary to introduce outside air through a dedicated system serving all areas. This dedicated system will often provide pressurization control and also control the humidity in the IT equipment room based on dew point, allowing the primary system serving the space to provide sensible-only cooling. The IT room ventilation consists of introducing between 1 and 2 air changes per hour.

Overview of CRAC Efficiency

The ASHRAE TC 9.9 committee recently published the 2008 ASHRAE Environmental Guidelines for Datacom Equipment [10], which extended the temperature-humidity envelope to provide greater flexibility in data center facility operations, particularly with the goal of reducing energy consumption. The recommended temperature limits are from 18 ºC to 27 ºC. The humidity is limited to less than 60%, with lower and upper dew point temperatures of 5.5 ºC and 15 ºC.

These values are expressed in relation to temperatures in the cold aisle. Consequently, they should be considered only for room layouts with hot and cold aisles. The CRAC and hardware manufacturers' suggestion is system operation below 23 ºC. Above this temperature, the server fans start to increase their speed, raising their power consumption and, consequently, the server's.
Figure 45: Variation of fan power requirements with server inlet temperature. Source: ASHRAE
On the other hand, the temperature control in chilled-water units permits these values to be set, so it is possible to reach higher return temperatures. As an example, a modern chilled-water plant permits operation with elevated water temperatures, so on the air side it is possible to reach a return temperature of 33 ºC at the CRAC unit (IT room). Relative humidity is still regulated on the return side via its set point. Generally, the relative humidity considered with a return temperature higher than 30 ºC is 30%. This last aspect must not be confused with the values given by ASHRAE. DX units do not obtain the same efficiency benefit from higher temperatures as CW CRAC units, but their efficiency gain in terms of EER can be estimated at around 15% in nominal conditions. The manufacturers' suggestion in the case of DX units is to remain below 35 ºC.

Central Station Air Handling Units (AHUs)

Some larger IT facilities use central station AHUs. In particular, many telecommunications central offices, regardless of size, utilize central systems. There are advantages and disadvantages to the use of central station AHUs. Some aspects of central station AHUs, as they relate to IT facility applications, are contained in this section.

Coil Selection

There is a wide range of heating and cooling coil types that can be used for IT facilities, and ideally any coil design/specification should include modulating control.
In addition, for dehumidification purposes, cooling coils with close dew point control are very important. Cooling coil control valves should be designed to fail open. For more information on cooling coil design, a good reference is the ASHRAE Handbook: HVAC Systems and Equipment.

Humidification

Central station humidification systems used for IT facility applications can be of various types. Since humidification costs can be significant, an analysis of operating costs using site-specific energy costs should be made (ref. Herrlin 1996).

Part-Load Efficiency and Energy Recovery

Central station supply systems should be designed to accommodate a full range of loads in IT equipment areas with good part-load efficiency. Due to their larger capacity, central station supply systems may be able to provide more efficient energy recovery options than CRAC units; the use of rotary heat exchangers or cross-connected coils should be considered.

Flexibility/Redundancy Using VAV Systems

Flexibility and redundancy can be achieved by using variable-volume air distribution, oversizing, cross-connecting multiple systems, or providing standby equipment. Compared to constant-air-volume (CAV) units, variable-air-volume (VAV) equipment can be sized to provide excess capacity but operate at discharge temperatures appropriate for optimum humidity control, minimize operational fan horsepower requirements, provide superior control over space temperature, and reduce the need for reheat. Common pitfalls of VAV, such as a shift in underfloor pressure distribution and the associated flow through tiles, should be modeled using CFD or other analytical techniques to ensure that the system can modulate without adversely affecting overall airflow and cooling capability in critical areas. Variable airflow strategies need to take into consideration the need to deliver sufficient static pressure to the most limiting rack when operating at minimum flow. Using large centralized air handlers offers efficiency improvements from larger equipment while accommodating a number of controls and configuration efficiency opportunities.
The best practice scenario for the cooling system with large centralized air handlers has to consider [18]:
- Use load diversity to minimize fan power use.
- Optimize the air handler for fan efficiency and low pressure drop.
- Configure redundancy to reduce fan power use in normal operation.
- Use premium-efficiency motors and fans.
- Control volume by variable-speed drives on fans, based on space temperature.

Depending on where the data center is built, the cooling system can take advantage of the local weather conditions. The typical data center load profile is ideally suited for cooling with outdoor air during much of the year, particularly at night, when data centers still require significant cooling. When a free cooling system is viable, it is important to optimize the airflow and temperature set points. This technique allows working with a higher delta T and less airflow than in a conventional cooling system design with water production and precision CRAC units.

Humidification

Humidification specifications and systems have often been found to be excessive and/or wasteful in data center facilities. A careful, site-specific design approach to these energy-intensive systems is usually needed to avoid energy waste.

Due to low human occupancy, not much humidity is internally generated in a data center. This provides an opportunity, for data centers of a significant size, to centrally control the humidity in an efficient manner. Dew-point control, rather than relative humidity control, is often the control variable of choice. The primary advantage of central humidity control is to avoid the simultaneous side-by-side humidification and dehumidification that can often be found in a poorly commissioned data center with multiple CRACs or other cooling units providing this function. If dehumidification (in addition to humidification) is handled centrally, another advantage is that the cooling coils on the floor of the facility can run dry. This allows for chilled-water reset (if deemed appropriate at part-load operation) without increased relative humidity. Humidity control costs will also be reduced, in the long run, with the use of high-quality humidity sensors. Maintenance of the proper control range will be a side benefit that is obviously of importance in a critical facility. The maintenance of a proper dead band for humidity control is also important to energy efficiency. The current range recommended by ASHRAE (40% to 55% at the IT equipment inlets) should be wide enough to avoid having to humidify. High relative humidity may cause various problems for IT equipment. Such problems include conductive anodic failures (CAF), hygroscopic dust failures (HDF), tape media errors and excessive wear, and corrosion. In extreme cases, condensation can occur on cold surfaces of direct-cooled equipment. Low relative humidity increases the magnitude and propensity for electrostatic discharge (ESD), which can damage equipment or adversely affect operation. Tape products and media may have excessive errors when exposed to low relative humidity.
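Dew-point control as recommended above implies converting measured dry-bulb temperature and relative humidity into a dew point. A sketch using the standard Magnus approximation (the 24 °C / 45% RH reading is a hypothetical example near the middle of the ASHRAE band; the Magnus constants are a common approximation, not from this report):

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point (deg C) from dry-bulb temperature and RH
    using the Magnus formula (constants b=17.62, c=243.12 degC)."""
    b, c = 17.62, 243.12
    gamma = (b * temp_c) / (c + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (c * gamma) / (b - gamma)

# Hypothetical sensor reading: 24 degC dry bulb, 45% relative humidity
print(round(dew_point_c(24.0, 45.0), 1))  # about 11.3 degC dew point
```

Controlling to a fixed dew point rather than a fixed RH avoids the fighting between units that the text describes, since dew point is a property of the air's moisture content alone.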
There are three primary energy implications related to the humidity of a data center facility:
- Impact of the humidity level allowed in a facility on the energy cost of humidification and dehumidification.
- Impact of the relative humidity set point on dehumidification and reheat costs.
- Impact of the humidity dead band on energy cost.

The current recommended relative humidity range for data centers should minimize maintenance and operational problems related to humidity (either too low or too high), but there is an energy cost associated with maintaining the environment in this range [3, 5, 7] (see section 2).

Plant Optimization

When mechanical water production is necessary and air-side free cooling proves unfeasible, a chilled water plant is used, so the primary objective is to maximize the chiller system efficiency. As mentioned before, the cooling equipment accounts for the largest electrical consumption of the data center infrastructure (apart from the IT equipment electric load itself).

While a whole-plant approach must be taken to achieve the most efficient chilled water plant solution, the chiller is a very large energy consumer and should be selected to minimize its energy consumption. The two types of mechanical cooling equipment typically used to cool data centers are water chillers and small refrigerant compressors. The choice of equipment can be related to a number of factors, but the predominant factor is size: compressor units are frequently used in small facilities and chilled-water systems in larger facilities. Chillers are typically one of the largest users of energy in data center facilities. On both the air and water sides, a balance can be established between these parameters that optimizes the electrical consumption of the complete cooling system. If the air-side delta T in the room can be increased, the EER of the unit rises because ventilation needs are reduced. On the other hand, increasing the chilled water production temperature also raises the EER of the chiller. At the same time, raising the water delta T across the chiller reduces the water flow needed to deliver the same cooling power, so there are savings in pumping. Thus, to improve the energy efficiency of the cooling system it is important to work with higher temperatures [27]. As such, efforts to maximize the energy efficiency of a chiller, or to minimize hours of operation, can go a long way in terms of reducing total operating costs. The parameters that affect chiller efficiency to take into consideration are [3, 5, 7, 9]:
- Type and size of chiller.
- Chilled-water supply temperature.
- Chilled-water differential temperature.
- Entering condenser-water temperature.
- Condenser-water differential temperature.
- Part-load efficiency as a function of compressor VFD drive.
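The pumping saving from a wider chilled-water delta T follows from the heat balance Q = m_dot * cp * dT. A sketch with hypothetical numbers (the 500 kW load and the 5 K and 8 K delta T values are illustrative, not from this report):

```python
def chilled_water_flow_l_s(cooling_kw: float, delta_t_k: float) -> float:
    """Required chilled-water flow (l/s) for a given cooling load.

    Uses Q = m_dot * cp * dT with cp ~= 4.19 kJ/(kg K) and a water
    density of ~1 kg/l.
    """
    return cooling_kw / (4.19 * delta_t_k)

# Hypothetical 500 kW load: widening delta T from 5 K to 8 K
narrow = chilled_water_flow_l_s(500.0, 5.0)   # ~23.9 l/s
wide = chilled_water_flow_l_s(500.0, 8.0)     # ~14.9 l/s
print(round(1 - wide / narrow, 3))            # 0.375 -- flow (and pumping) cut ~38%
```

Since pump power at constant head scales at least linearly with flow, and faster with flow when affinity effects apply, this flow reduction translates directly into pumping energy savings.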
When chiller plants are selected during design, it is a good idea to examine the expected full-load and part-load design efficiency of chillers and refrigerants from several vendors to determine the best choice for a project. The thermodynamic efficiency of a chiller is sensitive to the difference in temperature between the chilled water and the condenser water.
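This sensitivity can be turned into rough annual numbers. A sketch assuming a hypothetical 300 kW average cooling load served year-round, a baseline COP of 5.0, and an assumed 20% efficiency gain from warmer chilled water (all three figures are illustrative assumptions, not values from this report):

```python
def chiller_energy_kwh(load_kw: float, hours: float, cop: float) -> float:
    """Electrical energy (kWh) to serve a constant cooling load at a given COP."""
    return load_kw * hours / cop

# Hypothetical 300 kW average load, 8760 h/year, baseline COP 5.0;
# assume a 20% COP improvement from raising the supply water temperature.
base = chiller_energy_kwh(300.0, 8760.0, 5.0)
improved = chiller_energy_kwh(300.0, 8760.0, 5.0 * 1.20)
print(round(base - improved))  # 87600 kWh saved per year
```

Even a modest COP gain compounds over a data center's continuous operating hours, which is why chilled-water temperature is such a high-leverage design parameter.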

For data centers, chilled water temperatures higher than 7 °C should be considered to take advantage of the efficiency gains. IT equipment air inlet temperatures as high as 25 °C are still within the recommended operating range of almost all IT equipment, and since internal sources of humidity are low, small low-temperature coils in the facility can probably handle the dehumidification load of the facility, allowing the primary cooling coils to operate dry at a higher temperature. From an energy-efficiency standpoint, therefore, it is probably advantageous to increase the differential temperature across the chiller: since less water flow is needed to deliver the same cooling power, pumping costs will be reduced with little effect on chiller energy consumption. The system designer needs to obtain product-specific curves for their equipment to optimize the chiller design and operating set points. Chillers with variable-speed compressors have been available for more than 20 years, and are a well-established method to achieve high energy efficiency at part-load operation. Another strategy for maintaining good part-load performance is to have multiple chillers, staged as needed to match the data center facility cooling load.

Figure 46: Variation of chiller efficiency with leaving chilled water temperature. Source: ASHRAE

General aspects to consider when designing the data center cooling system are:
- Select chillers for high efficiency.
- Implement an aggressive condenser water reset.
- Minimize tower fan power and size towers for a close approach. The great majority of data center cooling is sensible heat; there is very little dehumidification required.

- Chiller performance improves when higher temperature water is produced; for example, a typical centrifugal chiller's efficiency is 15-25% better when producing 12 °C chilled water versus 7 °C chilled water.
- Use primary-only variable-flow chilled water pumping.
- Consider thermal storage.
- Monitor system efficiency.
- Right-size the cooling plant. The design should recognize that the standard operating condition will be at part load and optimize for efficiency accordingly.
- Use free cooling / waterside economization. The unusual nature of a data center load, which is mostly independent of outside air temperature and solar loads, makes free cooling very attractive and increases the importance of efficiency over first cost.

Also, the typical level of redundancy and reliability can influence the value of various design options. As an indicator of the variability of efficiencies in practice, benchmarking of real data centers in the field found that HVAC energy use ranged from 21% to 54% of the total.

Liquid Cooling

Most current large data center designs use chilled air from air handling units (CRAC or centralized) to cool IT equipment. With rack heat loads increasing, many data centers are experiencing difficulty in cost-effectively meeting IT equipment's required inlet air temperatures and airflow rates, especially with the trend toward high-density hardware. This situation is creating a need for liquid cooling solutions. The overall goals of the liquid cooling implementations are to transfer as much waste heat to the facility water as possible and, in some of the implementations, to reduce the overall volume of airflow needed by the racks. In addition, liquid cooling may be required to achieve higher performance of the IT equipment through the lower microprocessor temperatures it can deliver. Therefore, more and more facilities are implementing liquid cooling, and others are considering it.
Liquid cooling is defined as the case where liquid must be circulated to and from the entity for operation [6]. For instance, a liquid cooled rack defines the case where liquid must be circulated to and from the rack for operation. This definition can be expanded to liquid cooled IT equipment and liquid cooled electronics. Liquid cooled applications can deploy open or closed cooling architectures, and single-phase or two-phase cooling.

Liquid Cooling Systems

Open versus closed architecture indicates whether the electronics in the cabinet are exposed to the room's ambient air; a closed system can be the case where the rack is closed and the cooling air continuously circulates within the cabinet.

Figure 47: Liquid Cooling Device. Source: Rittal

An example of an open system is a rack with a rear door heat exchanger. Cool room air enters the front of the cabinet, absorbs heat as it passes over the electronics, then passes through the heat exchanger in the rear door before it exits. In this scenario the liquid is removing some or all of the heat from the cabinet and relieving the load on the room air conditioning system. Open-cooling architecture is available in three basic configurations and is the more widely employed today. The configurations are as follows:

Rear Door Exchanger: In the last four years several hardware manufacturers, such as IBM, have developed solutions to cool the rack locally. The heat exchanger is a water-cooled device that is mounted on the rear of a rack to cool the air that is heated and exhausted by devices inside the rack. A supply hose delivers chilled, conditioned water to the heat exchanger. A return hose delivers warmed water back to the water pump or chiller. This is referred to as a secondary cooling loop. The rack on which the heat exchanger is installed can be on a raised or a non-raised floor [28]. The hot air exiting the servers inside the rack passes through a heat exchanger that removes the heat from the air before it enters the open aisle. This configuration in some cases eliminates the need for a hot aisle/cold aisle arrangement, as the heat can be fully neutralized before the air enters the aisle.

Figure 48: Example of liquid system. Source: ASHRAE

Figure 49: Rear Door Heat Exchanger (RDHx) Device Performance. Source: IBM

Some systems are designed to utilize the server fans to transport the air through the heat exchanger, while others have their own fan mechanisms. With this kind of solution, the cooling system energy efficiency is improved by around 30%. Figure 49 indicates the variation of the heat removal achieved by this device with the water temperature and flow. The rear door exchanger is the optimal solution for cooling racks from 12 to 45 kW.

In Row: A heat exchanger/fan unit collects the heat from the air at the rear of the enclosures, cools it, and then transmits it into the cold aisle. By proper placement of the units, the cold aisle is effectively pressurized, preventing hot air from migrating into the cold aisle. This device provides up to 30 kW of cooling capacity and is installed between two racks.

Figure 50: Examples of InRow devices.

In Row with hot aisle enclosure: The hot aisle is contained via mechanical barriers (hot aisle enclosure). This prevents the hot air from escaping to the cold aisles and allows for a much higher hot-aisle temperature, thus increasing the potential capacity of the heat exchange units. Heat exchanger/fan units collect this hot air, cool it, and then transmit it into the cold aisle.

Figure 51: DC cooling by InRow devices and hot aisle containment. Source: APC

In a closed design, air is circulated through the electronics, passes through an air-to-liquid heat exchanger located in the cabinet, and is then returned back to the electronics. In this case the liquid in the heat exchanger absorbs all of the heat, and the recycled air meets the incoming air temperature specifications for the IT equipment [6] (see the figure below).

Figure 52: Close-coupled liquid cooling system. Source: ASHRAE

Liquid cooled cabinets provide a close-coupled system that places the liquid heat removal medium as close to the heat sources as practical, which can be designed to be quite efficient. Bypassing an air-cooling loop at the facility level saves on fan energy, and may increase the discharge temperature of the heat rejected to the environment, allowing for increased use of economizer cycles. On the other hand, another classification distinguishes between single-phase cooling (liquid cooling alone) and two-phase cooling (a conventional air-based system with CRAC units combined with rack-level rear door heat exchangers), and by the type of refrigerant utilized. The next figure shows the results of the study carried out by the Silicon Valley Leadership Group [29]. This project set out to verify that modular construction could solve observed problems of inefficient operation when data centers are not fully occupied, by distributing power more economically or cooling high-density IT equipment more efficiently. LBNL initiated the study to investigate the energy implications of commercially available modular cooling systems compared to those of traditional data centers. The best practice scenario for cooling this sort of room is the result of this study and is shown in the figure below:

Figure 53: DC cooling solution comparison. Source: Silicon Valley Leadership Group

Coolants

The coolant most employed today is water. For other kinds of refrigerants that can be used in liquid cooling applications, refer to the ASHRAE Liquid Cooling Guidelines for Datacom Equipment Centers [6]. Water-cooled (or glycol-cooled) systems have a number of important parameters that can be optimized for energy efficiency. Most energy consumption in water-cooled systems can be placed into four broad categories:
- Pumping power.
- Fan energy (to drive the air side of any air-to-liquid heat exchangers).
- Chilled water production efficiency.
- Heat rejection.

Different refrigerants will also typically have different efficiencies, and some may be better suited to the specific operating temperature range of a facility than others. Higher temperatures may preclude the use of certain refrigerants due to their higher operating pressures. By optimizing the working parameters of the system, a significant level of energy efficiency can be reached in the cooling system.

The step between future cooling systems, based on single-phase liquid cooling (without air contribution), and the standard design (air supplied to the IT room ambient) is two-phase liquid cooling, which employs both systems. The percentage of contribution of each kind of cooling system depends ultimately on the IT equipment power density: the higher the equipment power density, the greater the relevance of the liquid cooling installation, not just for energy efficiency but also for the correct cooling of the IT equipment. New data centers designed today partially implement liquid cooling with a combination of air-only and liquid cooling technologies (two phase). As mentioned before, the percentage of cooling covered by each technology depends on the number of high-density IT racks housed in the data center and the future prospects of hardware growth and acquisition. However, for existing facilities that have no more thermal capacity, or are out of floor space with no room for expansion, this solution may be the only alternative.

Computational Fluid Dynamics

The best way to design an optimized airflow management program is through the use of a Computational Fluid Dynamics (CFD) study, which helps determine the best way to optimize hot and cold air separation. Physical measurements and field testing are not only time- and labor-intensive but sometimes impossible to carry out fully. In such situations, CFD simulations provide a feasible alternative for testing various design layouts and configurations in a relatively short time. CFD simulations can predict the air velocities, pressure, and temperature distribution in the entire data center facility. They can be used, for example, to locate areas of recirculation and short-circuiting or to assess airflow patterns around the racks.
Facilities managers, designers, and consultants can employ these techniques to estimate the performance of a proposed layout before actually building the facility. CFD simulations can also provide appropriate insight and guidance in reconfiguring existing facilities toward the same goal of optimizing the facility's cooling system. CFD analysis can be an effective tool for the optimization and troubleshooting of airflow. As mentioned before, an energy-efficient layout of a data center depends on several factors, including the arrangement and location of various data center components, such as the positioning of air-conditioning systems, racks, and other components. CFD analysis, which is based on sound laws of physics, can help visualize airflow patterns and the resulting temperature distribution in a data center facility. Any mixing and short-circuiting airflow patterns can be visualized through airflow animations based on CFD analysis instead of intuitive imagination. Optimization of the data center layout and equipment selection through CFD analysis at the design evaluation phase can help avoid future problems related to poor airflow distribution and the resulting hot spots in the facility. CFD analysis can provide a detailed cooling audit report of the facility, including the performance evaluation of individual components such as air-conditioning units, racks, perforated floor tiles, and any other supplementary cooling systems. Such reports can help in the optimization, selection, and placement of various components in the facility.

For example, such CFD-analysis-based cooling audit reports can predict the relative performance of each cooling unit, indicating over- or underperformance with respect to other units on the floor, which can affect energy efficiency and long-term total cost of ownership.

Cooling Design Best Practices

In the case of normal load density racks (see section 4.1.1), hot spots are infrequent as long as the rack load is less than or equal to the design value. In this case, the IT room is well cooled with air delivered through the under-floor plenum by the CRAC system. Depending on the power diversity of the racks, it may be necessary to install enclosures in the cold or hot aisle, principally to stop the hot spots caused by insufficient airflow arriving at the corresponding perforated tiles (see section 4.1.2). Usually IT rooms are designed for a supposedly constant heat load because, when dimensioning the facilities, it is necessary to establish an average load per rack. The reality of an operating IT room is very different: the thermal dissipation of individual racks usually varies and sometimes exceeds the design value. Currently, the usual estimation per rack is around 5 kW (an IT room power density of 2,000 W/m2). When the IT room houses racks with more than 5 kW of load, the region where they are placed usually does not receive enough airflow to compensate a thermal load above 5 kW at the design delta T between the front and the back of the rack, or between the under-floor supply air temperature and the return to the CRAC unit (the normal delta T in precision cooling devices). When the load exceeds 5 kW it is normal to register hot spots on these racks. The easiest way to optimize the cooling in these situations is to physically stop the airflow recirculation (see section 4.1.3). In those cases it is necessary to install enclosures in the hot or cold aisles.
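The airflow a rack requires follows from a simple sensible heat balance, Q = m_dot * cp * delta T. A sketch for the 5 kW rack discussed above (the 10 K front-to-back delta T is an assumed illustrative value, since the report's design figure is not given here):

```python
def rack_airflow_m3_h(heat_kw: float, delta_t_k: float) -> float:
    """Airflow (m3/h) needed to remove a rack's heat load at a given
    front-to-back delta T. Assumes air density ~1.2 kg/m3 and
    cp = 1.005 kJ/(kg K)."""
    mass_flow = heat_kw / (1.005 * delta_t_k)   # kg/s of air
    return mass_flow / 1.2 * 3600.0             # convert to m3/h

# A 5 kW rack with an assumed 10 K front-to-back delta T:
print(round(rack_airflow_m3_h(5.0, 10.0)))  # 1493 m3/h
```

This makes the hot-spot mechanism concrete: a rack dissipating double the design load needs double the airflow through its perforated tile, which the under-floor plenum often cannot deliver locally.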
The effect achieved with the enclosures optimizes the airflow management, as previously remarked. Due to the complexity of the thermal behavior of IT equipment, it is usually necessary to run CFD simulations to check the correct design of the cooling devices and air distribution of the IT room. These kinds of simulations are useful to optimize the rack and CRAC layout from the point of view of air management and energy efficiency (see section 4.1.7). Currently it is common to find data centers that house several high-density racks. From about 45 kW per rack, the configurations explained above begin to be inadequate for cooling, and new closed-loop architectures are being deployed to face this open challenge.

4.3 Electrical System

The electrical infrastructure is one of the main facilities of the data center, because its operation depends on it. The next block diagram represents a general scheme of a data center electrical system.

Figure 54: Example of electrical system in a Data Center. Source: ASHRAE

Protection from power loss is a main characteristic of data center facilities. Such protection comes at a significant first cost, and also carries a continuous power usage cost that can be reduced through careful design and selection. In most cases the IT equipment must run 8,760 hours per year, which implies that the electric supply must be present 100% of the time. When a failure occurs, an uninterruptible power supply (UPS) carries the load for the time necessary to start up the emergency power supply (generally a well-maintained engine generator). There is variation in the level of electrical distribution losses in a data center, depending on the equipment installed and the configuration of the distribution system. The main electrical distribution loss for the data centers documented in the LBNL study was an average 8% loss in the UPS equipment. The electrical distribution requires careful planning to reduce impedances and heat load [7]. General points to consider for more energy-efficient design and operation of a data center are listed below [5, 19]:
- Design the UPS system for efficiency. The electrical design impacts the load on, and the ultimate efficiency achieved by, the UPS.
- Maximize unit loading. The use of multiple smaller units can provide the same level of redundancy while still maintaining higher load factors, where UPS systems operate most efficiently.
- Average UPS loading: battery-based UPSs should be loaded to 50% or greater in actual operation.
- Select the most efficient UPS possible. UPS efficiency should exceed 90% at full load and 86% at half load.
- Specify minimum unit efficiency at expected load points.

Figure 55: UPS efficiency variation with workload. Source: APC

- Evaluate UPS technologies for the most efficient option.
- Do not over-specify power conditioning requirements. In general, the greater the level of power conditioning used, the lower the system efficiency.
- Eliminate the standby generator. Standby generators are typically specified with jacket and oil warmers that use electricity to maintain the system in standby at all times; especially if they are never used, they are a constant energy waste. Eliminating a standby generator requires proper engineering of the system, but will reduce energy use.
- Review the need for energy storage using batteries or flywheels, or a combination of them, as individual configurations may represent savings in operational efficiency or total cost of ownership.
- Redundancy should be used only up to the required level. Additional redundancy comes with an efficiency penalty.
- Select transformers with a low impedance, but calculate the fault currents and install the proper over-current protection. Consider harmonic-minimizing transformers.
- Limit long conductor runs at the lowest IT equipment utilization voltage by installing the PDUs as close to the rack load as possible.

On the other hand, the technical rooms where the electrical systems are placed must be cooled. The cooling system requirements to be considered when the technical rooms are sized and designed are described in detail in the ASHRAE Thermal Guidelines for Data Processing Environments [3]. Electrical power distribution equipment can typically tolerate more variation and a wider range of temperature and humidity than IT equipment. Equipment in this category includes incoming service/distribution switchgear, switchboards, automatic transfer switches, panelboards, and transformers. The technical characteristics of each piece of equipment should be checked to determine the amount of heat dissipated and the design conditions for satisfactory operation.
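The unit-loading guidance above can be illustrated with simple arithmetic. In a parallel-redundant configuration the IT load is shared across all installed modules, so over-sized modules run at low load factors where efficiency is poor. A sketch (the 400 kW IT load and both module ratings are hypothetical):

```python
def ups_unit_load_factor(it_load_kw: float, unit_rating_kw: float,
                         units_installed: int) -> float:
    """Load factor seen by each UPS module when the IT load is shared
    equally across all installed units (parallel-redundant operation)."""
    return it_load_kw / (unit_rating_kw * units_installed)

# Hypothetical 400 kW IT load, two configurations that each tolerate
# the loss of one module:
print(round(ups_unit_load_factor(400.0, 500.0, 4), 2))  # 0.2 -- well below the efficient range
print(round(ups_unit_load_factor(400.0, 200.0, 3), 2))  # 0.67 -- above the 50% guideline
```

The second configuration keeps each module above the 50% loading guideline cited above while still surviving a single module failure.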

Building, electrical, and fire codes should be checked to identify when equipment must be enclosed (for example, transformers) to prevent unauthorized access, or housed in a separate room.

4.4 Lighting

Data centers are typically lightly occupied. While lighting is a small portion of the total power usage of a data center, it can often be reduced through inexpensive technologies and designs:
- Use active sensors to shut off lights when the data center is unoccupied, reducing lighting power use and waste heat generation.
- Occupancy sensors. Occupancy sensors can be a good option for data centers that are infrequently occupied. Thorough area coverage with occupancy sensors, or an override, should be used to ensure the lights stay on during installation procedures, when a worker may be 'hidden' behind a rack for an extended period. Design light circuiting and switching to allow for greater manual control.
- Bi-level lighting. Provide two levels of clearly marked, easily actuated switching so the lighting level can be easily changed between normal circulation-space lighting and a higher-power detail-work lighting level.
- Task lighting. Provide dedicated task lighting specifically for installation detail work to allow the use of lower, circulation-space and hall-level lighting throughout the data center area.

4.5 Commissioning and Retrocommissioning

An efficient data center not only requires a reliable and efficient design, it also requires proper construction and operation of the space. Commissioning is a methodical and thorough process to ensure the systems are installed and operating correctly in all aspects, including efficiency [5]. The first step to ensure that commissioning is done in a thorough manner is to engage additional design expertise (a consultant) for review and guidance. Design recommendations from a designer not directly involved in the project details and/or the assistance of a dedicated commissioning agent can greatly improve the final quality of the facility.
Commissioning is a major task that requires considerable management and coordination throughout the design and construction process. To commission a system it is necessary to ensure that all systems and control sequences, including those that are only relevant to efficient operation, are installed per design. The basis of a data center commissioning plan consists in establishing several strategies [19]:
- Document testing of all equipment and control sequences. Develop a detailed testing plan for all components. The plan should encompass all expected sequence-of-operation conditions and states.

- Measure equipment energy efficiency on site. Measure and verify that major pieces of equipment meet the specified efficiency requirements. Chillers in particular can have seriously degraded cooling efficiency due to minor installation damage or errors, with no outward symptoms such as loss of capacity or unusual noise.
- Provide an appropriate budget and schedule for commissioning. Commissioning is a separate, non-standard procedure that is necessary to ensure the facility is constructed to, and operating at, peak efficiency.
- Perform full operational testing of all equipment. Commissioning testing of all equipment should be performed after the system installation is complete, immediately prior to occupancy. Normal operation and all failure modes should be tested. In many critical-facility cases, the use of load banks to produce a realistic load on the system is justified to ensure system reliability under design conditions.

Commissioning activities have been characterized into five broad categories, or levels. Level 1 through Level 3 commissioning is to a large extent focused on the component, assembly, and equipment aspects, ensuring they are procured, received, and installed in accordance with the design documents. Level 4 commissioning is commonly referred to as site acceptance testing, and Level 5 commissioning is called integrated systems testing. These levels are summarized and explained in the ASHRAE Design Considerations for Datacom Equipment Centers [5]. Sometimes it is necessary to verify that the system is operating as designed, and performing retrocommissioning is essential. Many older data centers may never have been commissioned, and even where they have been, performance degrades over time. Perform a full commissioning and correct any problems found.
Where control loops have been overridden due to immediate operational concerns, such as locking out condenser water reset due to chiller instability, diagnose and correct the underlying problem to maximize system efficiency, effectiveness, and reliability. As a rule, a thorough retrocommissioning will locate a number of areas where efficiency can be improved. Installing efficiency monitoring equipment is fundamental to achieving energy savings. A number of simple metrics (cooling plant kW/ton, economizer hours of operation, humidification/dehumidification operation, etc.) should be identified and continuously monitored and displayed to allow facilities personnel to recognize when system efficiency has been compromised.
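The cooling plant kW/ton metric mentioned above is straightforward to compute from monitored data. A sketch with hypothetical readings (1 ton of refrigeration is approximately 3.517 kW of cooling):

```python
def plant_kw_per_ton(plant_power_kw: float, cooling_load_kw: float) -> float:
    """Cooling plant efficiency in kW/ton.

    1 ton of refrigeration ~= 3.517 kW thermal.
    Lower values mean a more efficient plant.
    """
    tons = cooling_load_kw / 3.517
    return plant_power_kw / tons

# Hypothetical plant drawing 180 kW of electricity to deliver 700 kW of cooling:
print(round(plant_kw_per_ton(180.0, 700.0), 2))  # 0.9 kW/ton
```

Trending this value continuously, as the text recommends, lets operators spot efficiency degradation (fouled condensers, overridden resets) long before it shows up as a capacity problem.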

5. Data Center Energy Efficiency Metrics

Data center power and cooling are two of the biggest issues facing IT organizations today, and growing companies need a way to control these costs while enabling future expansion. With more efficient data centers, IT organizations can better manage increased computing. A global consortium of IT companies and professionals called The Green Grid is seeking to improve energy efficiency in data centers and business computing. This organization aims to unify global industry efforts to standardize on a common set of metrics, processes, methods and new technologies to further its common goals. The need to define and measure energy efficiency is widely recognized. In order to quantify the energy efficiency of DCs, a number of different metrics are being used. Although many organizations have called for standardization, two competing metrics are in wide use: data center infrastructure efficiency (DCiE) and power usage effectiveness (PUE). The PUE is the inverse of the DCiE. The Green Grid has been developing publications that explain their calculation [30-33]. In summary, the PUE and DCiE provide a way to determine:
- Opportunities to improve a data center's operational efficiency.
- How a data center compares with competitive data centers.
- Whether the data center operators are improving their designs and processes over time.
- Opportunities to repurpose energy for additional IT equipment.

DCiE is defined as the ratio of IT Equipment Power to Total Facility Power, as shown in Equation 4 below:

DCiE = IT Equipment Power / Total Facility Power (Equation 4)

The Total Facility Power is defined as the power measured at the utility meter that is dedicated solely to the data center (this distinction is important in mixed-use buildings that house data centers as one of a number of functions). The IT Equipment Power is defined as the power consumed by equipment that is used to manage, process, store or route data within the compute space.
It is important to understand the components of the loads in the metrics, which can be described as follows:

1. IT Equipment Power: the load associated with all of the IT equipment (compute, storage and network equipment), along with supplemental equipment such as KVM switches, monitors, and workstations/laptops used to monitor or otherwise control the data center.

2. Total Facility Power: all IT Equipment Power as described in item 1 above, plus everything that supports the IT equipment load, such as:
   a. Power delivery components, e.g. UPS, switchgear, generators, PDUs, batteries and distribution losses external to the IT equipment
   b. Cooling system components, e.g. chillers, computer room air conditioning (CRAC) units, direct expansion (DX) air handler units, pumps, and cooling towers
   c. Other miscellaneous component loads such as data center lighting

Figure 56: DCiE Expression. Source: The Green Grid

The intent of DCiE is to assist decision makers for data center operations, IT or facilities in the effort to improve data center efficiency. As with any data point, this is only one part of the entire data center picture. DCiE is valuable for monitoring changes in one data center at an aggregated level. It can also help to identify large differences in efficiency between similar data centers, though further investigation is required to understand why these variations exist. It is a first step to better understanding a data center's efficiency; subsequent investigation is required to determine the best approach for additional improvement.

Figure 57: PUE breakdown into main components. Source: The Green Grid
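The relationship between the two metrics can be made concrete with a small worked example. The meter readings below are hypothetical illustration values, not measurements from this report; the component breakdown follows the load categories listed above.

```python
# Hypothetical meter readings in kW; values are illustrative, not from the report.
it_equipment_kw = 800.0           # compute, storage and network equipment
power_delivery_losses_kw = 120.0  # UPS, switchgear, PDUs, distribution losses
cooling_kw = 400.0                # chillers, CRAC units, pumps, cooling towers
lighting_misc_kw = 30.0           # lighting and other miscellaneous loads

total_facility_kw = (it_equipment_kw + power_delivery_losses_kw
                     + cooling_kw + lighting_misc_kw)

dcie = it_equipment_kw / total_facility_kw  # DCiE = IT power / total facility power
pue = total_facility_kw / it_equipment_kw   # PUE is the inverse of DCiE

print(f"DCiE = {dcie:.1%}, PUE = {pue:.2f}")  # DCiE = 59.3%, PUE = 1.69
```

A PUE of 1.69 means that for every watt delivered to IT equipment, roughly 0.69 W is spent on power delivery, cooling and other overheads; lowering those overhead loads pushes PUE toward its ideal value of 1.0 and DCiE toward 100%.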

Figure 58: Method for Estimating HVAC Power. Source: The Green Grid
