WHITE PAPER #30
QUALITATIVE ANALYSIS OF COOLING ARCHITECTURES FOR DATA CENTERS
EDITOR: Bob Blough, Emerson Network Power
CONTRIBUTORS: John Bean, APC by Schneider Electric; Robb Jones, Chatsworth Products Inc.; Mike Patterson, Intel; Rich Jones, Rich Jones Consulting; Rob Salvatore, Sungard
Executive Summary

Many different cooling architectures can be used to cool data centers today. Each has its own advantages and disadvantages, which can have a major impact on energy efficiency as well as on other aspects of the facility. This white paper offers a qualitative comparison of the popular architectures available today, providing readers with insight to help determine the cooling architecture that best fits their data center strategy.
Table of Contents
I. Introduction
II. Assumptions
III. Definitions
IV. Airflow Management Strategies
V. Equipment Placement Strategies
VI. Heat Rejection Strategies
VII. Conclusion
VIII. References
IX. About The Green Grid

I. Introduction

The Green Grid (TGG) is a global consortium of companies, government agencies, and educational institutions dedicated to advancing energy efficiency in data centers and business computing ecosystems. The Thermal Management Working Group within TGG seeks to provide the industry with increased awareness of fundamental data center relationships, particularly when those relationships can be used to increase energy efficiency and reduce a data center's total cost of ownership (TCO). Of particular benefit for energy efficiency is a proper understanding of data center cooling architectures. This white paper covers the most common cooling architectures used for data center applications and provides a qualitative assessment of each. Similar qualitative information for common power configurations can be found in TGG White Paper #4, Qualitative Analysis of Power Distribution Configurations for Data Centers.[1]

To better present a qualitative comparison of the commonly deployed cooling architectures, they have been organized into three groups. Each group offers different and distinct attributes to choose from, but in a number of instances two or more can be used together to achieve highly energy-efficient results. The three cooling architecture strategy groups are:

- Airflow management strategies. Taking into account your anticipated equipment airflow requirements, delivery methods, and room layout, how do you want to manage airflow?
  - Open
  - Partially contained
  - Contained
- Equipment placement strategies. Based on the physical room layout and building infrastructure, where are the optimum locations to place the cooling equipment?
  - Cabinet
  - Row
  - Perimeter
  - Rooftop/building exterior
- Heat rejection strategies. Considering the maximum equipment densities and projected overall cooling requirements, what are the viable options for rejecting heat?
  - Chilled water
  - Direct expansion (DX) refrigeration system
  - Economization

The following qualitative analysis parameters are discussed for each of the cooling architectures:

- Highlights
- Advantages and disadvantages
- Future outlook

Each cooling architecture's advantages and disadvantages are presented in a table that contains the following five key parameters:

- Current usage and availability (IT/data center type, new versus retrofit, climate, and location)
- Energy efficiency (as compared to the open baseline configuration and power usage effectiveness [PUE])
- Reliability
- Equipment (TCO and capacity range)
- Standardization and acceptance (standards compliance, sales growth trend, and installed base)

The discussion below is strictly qualitative; any quantitative discussion is beyond the scope of this document. As a follow-on to this publication, The Green Grid may develop a quantitative data center cooling architecture white paper that provides further numerical evidence to support the distinctions
offered here.

A common question in the discussion of cooling architectures is that of air cooling versus liquid cooling. In practice, data centers generally use not one or the other but a combination of both. For the majority of cases, the individual IT components are air-cooled, and somewhere in the thermal management system a liquid is used to remove the heat from the air through some form of heat exchanger. Such a heat transfer may be designed to occur at the rack level, with a liquid-cooled rack or a rear-door heat exchanger, or at the room level, in a perimeter computer room air conditioner (CRAC). In both cases, air and a liquid are each used as a heat transfer fluid. Therefore, this paper does not include a specific discussion comparing air versus liquid cooling, because the decision is generally not which coolant to use but rather where to have the air/liquid heat transfer occur.

There are, however, limiting cases where this is not true. One extreme is a full air-side economizer architecture (discussed later in this paper), where filtered outdoor air is used as the cooling medium and no liquid is used at all. The other end of the spectrum is deploying liquid cooling all the way into the IT equipment, where the components themselves are liquid-cooled. The incorrect assumption is often made that if the CPU is liquid-cooled, then the cooling problem is solved; but in most typical servers, the CPU represents only somewhere between 25% and 40% of the server's heat generation. So if liquid cooling within the IT equipment cools only the CPU, the remainder of the load must still be cooled, often through standard air cooling. Fully liquid-cooled systems can be very efficient if fully integrated into the building's cooling system, but their cost and complexity limit them to specialty applications, which are outside the scope of this paper.
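The arithmetic behind this point can be sketched quickly. In the snippet below, the 25% to 40% CPU heat share comes from the text above; the 10 kW rack power is a purely hypothetical figure chosen for illustration:

```python
# Residual air-cooling load when only the CPUs are liquid-cooled.
# The 25%-40% CPU heat share is from the text; the 10 kW rack power
# is a hypothetical figure used only for illustration.

def residual_air_load_kw(rack_power_kw, cpu_heat_fraction):
    """Heat (kW) still rejected to air if liquid cooling captures only the CPUs."""
    return rack_power_kw * (1.0 - cpu_heat_fraction)

rack_kw = 10.0  # hypothetical rack power
for frac in (0.25, 0.40):
    remaining = residual_air_load_kw(rack_kw, frac)
    print(f"CPU share {frac:.0%}: {remaining:.1f} kW must still be air-cooled")
```

Even at the high end of the range, more than half of the rack's heat still reaches the room air, which is why CPU-only liquid cooling does not by itself remove the need for an air-cooling architecture.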
As IT equipment's level of manageability and controls implementation increases, another important topic will be the dynamic nature of IT and rack-level power consumption, workload placement, and specific localized cooling requirements. This is best demonstrated with an example. Consider a typical legacy, medium-density data center that has added server virtualization. Rather than exhibiting low, evenly distributed power consumption, its load will now be much more dynamic. Instead of the vast majority of servers ramping between idle and some low utilization, the server pool may experience much higher levels of variation. In addition, server idle power is dropping rapidly as manufacturers focus more on the low end of the power/operations scale. In the extreme case, the workload that had been lightly distributed across the floor may now be concentrated in a smaller number of virtualized servers running at much higher utilization and higher heat densities, with other servers (or racks of servers) in a sleep state. Depending on the specifics of the workload scheduler, that high-density
workload could routinely move across the data center. The challenge for thermal management will be addressing and supporting the localized hot spots while not wasting cooling on the racks that are in a very low idle or sleep state. This issue affects all of the cooling architectures discussed below; the main point is to consider the dynamic nature of the load when selecting cooling architectures.

II. Assumptions

The intended audiences for this publication are IT professionals, facility engineers, and CxOs who have a basic working knowledge of popular cooling architectures. The paper's content is meant to be used in the initial evaluation of data center cooling equipment and focuses on the cooling delivery path to the IT equipment. It does not include cooling loads outside of the data center (e.g., the remainder of the building).

III. Definitions

Blanking panel. A metal or plastic plate mounted over unused IT equipment spaces in a cabinet to restrict bypass airflow. Also called a filler plate.

Bypass. Cooled airflow returning to air conditioning units without passing through IT equipment.

Cabinet. An enclosure for housing IT equipment, sometimes also referred to as a rack. It must be properly configured in order to contain and separate supply and return airflow streams.

Computational fluid dynamics (CFD). A scientific software tool used in the analysis of data center airflow scenarios.

Close-coupled cooling. A cooling architecture in which cooling equipment is installed adjacent to server racks, reducing the airflow distance and minimizing the mixing of supply and return air.

Computer room air conditioner (CRAC). A unit that uses a compressor to mechanically cool air.

Computer room air handler (CRAH). A unit that uses chilled water to cool air.

Delta T. Delta temperature, the difference between the inlet (return) and outlet (supply) air temperatures of air conditioning equipment.

Direct expansion (DX) unitary system.
A refrigeration system in which the evaporator is in direct contact with the air stream, so the cooling coil of the air-side loop is also the evaporator of the refrigeration loop.

Economizer. A cooling technology that takes advantage of favorable outdoor conditions to provide partial or full cooling without the energy use of a refrigeration cycle. Economizers are divided into two fundamental categories:

Air-side systems. Systems that may use direct fresh air blown into the data center with hot air extracted and discharged back outdoors, or that may use an air-to-air heat exchanger. With the air-to-air heat exchanger, cooler outdoor air is used to partially or fully cool the interior data center
air. Air-side systems may be enhanced with either direct or indirect evaporative cooling, extending their operating range.

Water-side systems. These systems remove heat from the chilled water loop by a heat exchange process with outdoor air. Typically, a heat exchanger is piped in series with the chilled water return and chiller, and piped either in series or in parallel with the cooling tower and chiller condenser water circuit. When the cooling tower water loop is cooler than the return chilled water, it is used to partially or fully cool the chilled water, thus reducing or eliminating demand on the chiller refrigeration cycle.

Installed base. The number of commercial installations in operation.

Kilowatts (kW). A measurement of cooling capacity. (3.516 kilowatts = one ton)

Packaged DX system. A system in which the components of the DX unitary system refrigeration loop (evaporator, compressor, condenser, expansion device, and even some unit controls) are factory assembled, tested, and packaged together.

Power usage effectiveness (PUE). A measure of data center energy efficiency calculated by dividing the total data center energy consumption by the energy consumption of the IT computing equipment.

Rack. A metal structure consisting of one or more pairs of vertical mounting rails for securing and organizing IT equipment, sometimes also referred to as a cabinet. Racks are more often open structures without sheet metal sides or a top, whereas cabinets feature partially or fully enclosed sides and top to improve security and airflow characteristics.

Recirculation (or air short-circuiting). Hot return air that is allowed to mix with cold supply air, resulting in inefficient, diluted warm air entering IT equipment.

Retrofit. The process of upgrading an existing system's performance, also known as a brown field installation. This is the opposite of a new (green field) installation.

Return air.
The heated air returning to air conditioning equipment.

Ride through. The time required for the secondary/backup cooling system to fully replace the primary cooling system in handling cooling demands.

Sales growth trend. A sales trend measured at regular intervals, typically month-to-month or year-to-year, to indicate increasing, flat, or declining sales revenue.

Sensible cooling. The removal of heat that causes a change in temperature without any change in moisture content.

Standards compliance. A system's ability to comply with recognized government and industry best practices and minimum requirements.

Supply air. The cooled airflow discharged from air conditioning equipment.

Total cost of ownership (TCO). A financial decision-making analysis tool that captures the total installed and operating costs.
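Three of the definitions above can be made concrete with a short worked sketch. The 3.516 kW-per-ton conversion and the PUE ratio come directly from the definitions; the sample energy figures are illustrative assumptions, and the airflow formula uses the common standard-air approximation (q in BTU/hr = 1.08 × CFM × Delta T in °F), which is not from this paper:

```python
# Worked examples for three of the definitions above. The 3.516 kW/ton
# conversion and the PUE ratio come from the definitions; the sample
# energy figures are illustrative assumptions, and the airflow formula
# uses the common standard-air approximation q_BTU/hr = 1.08 * CFM * dT_F.

def pue(total_facility_kwh, it_equipment_kwh):
    """Power usage effectiveness: total data center energy / IT energy."""
    return total_facility_kwh / it_equipment_kwh

def tons_to_kw(tons):
    """Convert refrigeration tons to kilowatts of cooling capacity."""
    return tons * 3.516

def required_airflow_cfm(sensible_kw, delta_t_f):
    """Airflow (CFM) needed to remove a sensible load at a given Delta T."""
    btu_per_hr = sensible_kw * 3412.0      # 1 kW is roughly 3,412 BTU/hr
    return btu_per_hr / (1.08 * delta_t_f)

print(pue(1500.0, 1000.0))                      # 1.5: 0.5 kWh of overhead per IT kWh
print(tons_to_kw(10.0))                         # 35.16 kW for a nominal 10-ton unit
print(round(required_airflow_cfm(10.0, 20.0)))  # ~1580 CFM for 10 kW at a 20F Delta T
```

Note how widening the Delta T directly reduces the airflow, and therefore the fan power, needed per kilowatt of IT load; this is one reason the airflow management strategies in the next section matter for efficiency.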
IV. Airflow Management Strategies

STRATEGY 1A: OPEN AIRFLOW MANAGEMENT

Highlights:
This traditional strategy consists of cabinets whose arrangement has not been optimized for cooling. The cabinets have been placed within a room where no intentional airflow management is deployed to reduce bypass and recirculation between supply and return airflows. The significant mixing of cold and hot air streams makes this the least efficient of all cooling strategies. This is the traditional configuration in use today and is the basis for all other comparisons. It may be acceptable for existing, small, and/or low heat density data centers where other configurations may not be cost justifiable. Open airflow management is not recommended for new or retrofit installations.

Table 1. Open airflow management advantages and disadvantages

Current usage and availability
  Advantages: This is the legacy configuration still widely used today. It does not incorporate any airflow management methodology, so there are no related equipment availability concerns.
  Disadvantages: Open airflow management presents usage concerns related to the high cost of maintenance and operation.

Efficiency
  Advantages: Installation costs are below average because no additional equipment is installed to manage and separate intake and exhaust airflows. There may be a low first-time cost for adding load to available white space.
  Disadvantages: This configuration is not optimized for efficiency. Uncontrolled mixing of supply and return airflow paths makes it the least efficient cooling strategy. Studies show that most cooling configurations of this type are significantly over-deployed to overcome their inefficiencies. Adding redundant equipment to increase capacity is very expensive and does not guarantee effectiveness.

Reliability
  Advantages: Extensive industry experience has found that equipment functions reliably and maintenance is consistent with other strategies. The inefficiency of this approach does provide some inherent redundancy.
  Disadvantages: This strategy is very susceptible to hot spots and is problematic with mixed heat loads.

Equipment architectures
  Advantages: This strategy may be acceptable for small, low-density (<3 kW per cabinet) applications.
  Disadvantages: The amount and relative size of the cooling equipment in this configuration increases TCO and requires more floor space.

Standardization and acceptance
  Advantages: Because this is the traditional low-density architecture, by default it is the standard baseline configuration.
  Disadvantages: As knowledge grows within the user community, this configuration is becoming unacceptable for all retrofit and new installations.

Future Outlook:
The vast majority of existing data centers built prior to 2004 use the open airflow management strategy, which results in underutilized cooling capacity, inadequate redundancy, and the worst energy efficiency of any cooling architecture. As more organizations become aware of the significant energy savings that can be gained by employing any of the other strategies described in this paper, it is unlikely that this strategy will continue to be used. It may remain acceptable for existing, small, rapidly changing, and/or low heat density data centers where other configurations are not cost justifiable, but it should be avoided for new designs.
STRATEGY 1B: PARTIALLY CONTAINED AIRFLOW MANAGEMENT (E.G., HOT AISLE/COLD AISLE)

Highlights:
This strategy consists of some intentional, partially contained airflow management, comprising any one or a combination of airflow management techniques. However, no complete segregation, isolation, or containment of the supply and/or return airflows exists in this configuration. Most commonly deployed is the hot aisle/cold aisle methodology, but partially contained airflow management may also include one or more of the following components: return air (plenum) ceilings, supply air raised floors, patch panels, cable ingress grommets, enclosed cabinets and barriers, and internal configuration of cabinets, including blanking panels, air dams, and cable management to minimize airflow obstructions. This configuration has a moderate TCO, is cost-effective to install and maintain, can be used for new or retrofit applications in low- to medium-density (<10 kW per cabinet) environments, and is widely used in many data centers. It is commonly used in conjunction with perimeter room cooling, where other infrastructure considerations have a significant impact on energy efficiency and should be given serious consideration, including room density, size, and layout, along with proper quantity, location, and sizing of perimeter CRAC/CRAH units, ducting, and vents.
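The effect of incomplete containment on server inlet temperature can be sketched with a simple two-stream mixing balance. All three numbers below (supply temperature, return temperature, and recirculation fraction) are hypothetical values chosen for illustration:

```python
# Sketch of how recirculation dilutes cold supply air at the server inlet
# under partial containment. A simple two-stream mixing balance; the
# temperatures and recirculation fraction are illustrative assumptions.

def inlet_temp_f(supply_f, return_f, recirc_fraction):
    """Approximate server inlet temperature when a fraction of hot
    return air recirculates into the cold supply stream."""
    return (1.0 - recirc_fraction) * supply_f + recirc_fraction * return_f

# 65F supply, 95F return, 20% recirculation -> 71F at the server inlet
print(inlet_temp_f(65.0, 95.0, 0.20))
```

The warmer the effective inlet, the lower the supply set point must be driven to keep IT equipment within limits, which is one source of the efficiency penalty of partial containment relative to full containment.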
Table 2. Partially contained airflow management advantages and disadvantages

Current usage and availability
  Advantages: This configuration is commonly deployed globally and is possible without any major changes to infrastructure. Components are readily available from many suppliers, and it is relatively easy to upgrade/retrofit. Some combination of partially contained airflow management components should be used in all new construction.
  Disadvantages: Unless this configuration is consistently and thoroughly implemented throughout the data center, results will vary dramatically. Some implementation issues may exist due to interference between return ducting and cable trays or other existing infrastructure.

Efficiency
  Advantages: Partially contained airflow management offers improved efficiency over an open configuration and a moderate TCO.
  Disadvantages: Because the cooling is not fully efficient, this configuration employs extra cooling units to overcome mixing, resulting in lower-than-optimum efficiency. As heat density increases, a cooling density limitation may occur. CFD analysis may be required to achieve higher equipment densities and to optimize efficiency.

Reliability
  Advantages: This configuration has no active components, resulting in a low probability of failures.
  Disadvantages: As heat loads increase, the data center is more susceptible to hot spots.

Equipment architectures
  Advantages: The equipment is common and has a large installed base. Many manufacturers make equipment for this configuration. Both procurement and installation are cost-effective, and many combinations of components can be used to achieve cost and efficiency objectives.
  Disadvantages: Airflow may be limited by perforated access floor tiles. It can be difficult to reconfigure the cooling system to accommodate room or IT load changes.

Standardization and acceptance
  Advantages: Many facility managers, consultants, and IT operators are familiar with this strategy.
  Disadvantages: In barrier-contained configurations, the return side area can exceed 100°F, causing potential operator safety concerns.
Future Outlook:
Many existing data centers use this strategy and, depending upon the amount of recirculation and/or bypass, will suffer from capacity limitations, inadequate redundancy, and energy inefficiencies. Because it is so commonly used, not overly complex, modular, and easy and cost-effective to implement and maintain, this configuration is likely to remain popular well into the future. Many equipment manufacturers and industry groups are striving to increase efficiencies and educate data center owners/operators on ways to use these products as efficiently as possible; however, as cooling densities increase above 10 kW per cabinet, other strategies will have to be deployed.

STRATEGY 1C: CONTAINED AIRFLOW MANAGEMENT

Highlights:
Containment is achieved by using any one airflow management technique, or a combination of them, that results in complete isolation and segregation between the cooling source supply and the heat-generating equipment return airflows. A contained airflow management configuration consists of one or more of the following: chimney cabinets ducted to return air plenum spaces, raised-floor supply ducted underneath enclosed cabinets, overhead ducting to provide cool air and remove warm air, cold- and hot-aisle barriers, cable ingress grommets, and internal configuration of cabinets, including blanking panels, air dams, and cable management to minimize airflow obstructions. This passive containment strategy, when used with popular active architectures, can result in extreme energy efficiency because, without recirculation and bypass, 100% of the supply air is provided as intake for the IT equipment. This architecture is cost-effective to install and maintain, resulting in a low TCO, and it can be used for new or retrofit applications. When properly controlled, this architecture is extremely robust, and it can handle variations from rack to rack, as well as poor room or under-floor airflow distribution and transients.
The IT equipment will generally take the air it needs from the cold containment or cold room (and exhaust it into the hot-aisle containment). Containment schemes are ideal for supporting the increasingly dynamic workload challenges found in today's virtualized data centers. Cold-aisle containment can be fed from above or from underneath the floor. Hot-aisle containment and/or chimney cabinets typically direct the hot air to local cooling or a ducted return overhead. Both
cold-aisle containment and hot-aisle containment/chimney cabinets have advantages. The specific nature of the subject data center will generally dictate which choice is right.

Cold-aisle advantages include:
- Simplest implementation in a retrofit, particularly in a typical raised-floor data center with hot aisle/cold aisle
- Easier to ensure a consistent IT equipment inlet temperature
- Somewhat simpler integration with outdoor air economizer schemes
- Quicker response in the event of a fire (prevention/gas suppression system operation)

Hot-aisle advantages include:
- More of the data center's open space is at a more tolerable working temperature
- Larger cold air thermal mass, providing a greater buffer/longer ride-through time during a cooling system upset
- Somewhat more efficient in practice (with less than 100% isolation and some leakage from the cold to the hot side)

(Note that the efficiency difference between the two is small compared to the significant difference between containment and other airflow strategies such as hot aisle/cold aisle. The best path is to choose the containment method that fits best with your existing or planned infrastructure.)

This architecture is capable of cooling heat load densities in excess of 30 kW per cabinet. A contained configuration may enable airflow management without utilizing the raised-floor plenum for air distribution, potentially lowering facility costs by eliminating the raised-floor architecture.

Table 3. Contained airflow management advantages and disadvantages

Current usage and availability
  Advantages: This strategy is in common practice today. Many manufacturers offer either contained supply or contained return cabinets.
  Disadvantages: The perception is that this strategy only works for low to medium densities. Barriers generally have to be custom built in order to fit a particular room.

Efficiency
  Advantages: Isolating the airflow paths minimizes cooling system losses, achieving the highest level of airflow efficiency. Containment allows less mixing, higher set points, more cooling capacity, higher densities, and higher A/C Delta T efficiencies. Higher return temperatures allow for higher evaporator temperatures and a higher compressor coefficient of performance (COP). This strategy has a low installation and maintenance TCO. Depending on the architecture, it eliminates perforated access floor tile airflow limitations. When contained airflow management is used with economization, more free cooling hours are available. It minimizes temperature variance across server inlets, allowing the room supply temperature to be increased.
  Disadvantages: This configuration requires more planned system engineering. The CRAC/CRAH Delta T between supply and return air may need to be managed at extreme densities and high utilization levels. Other infrastructure considerations, such as room density, size, and layout, along with proper quantity, location, and sizing of air conditioning units, ducting, and vents, affect energy efficiency.

Reliability
  Advantages: Using non-fan-assisted return airflow ducting results in the highest reliability of all strategies.
  Disadvantages: Ducted systems employing fans add a potential point of failure, decreasing reliability to a level consistent with other fan-powered strategies. Containing cold supply (versus hot return) air reduces the cold air thermal mass available during ride-through conditions.

Equipment architectures
  Advantages: Passive ducted chimney (return air) cabinets can be used. Fan-assisted ducted chimney (return air) cabinets can be used. Supply air can be contained and delivered via raised floor through openings in cabinet bases. Physical barriers such as curtains and solid panels can contain cold aisles or hot aisles. Row-based cooling solutions can be used with either hot- or cold-aisle barriers. This configuration eliminates the need for CFD airflow analysis.
  Disadvantages: Fire sprinkler systems may need to be relocated to avoid obstruction by chimney cabinets and barriers. Chimney cabinets require deeper footprints to create rear air ducts. Barriers typically need to be customized to fit unique spaces, and they may be viewed as obtrusive and unattractive. In barrier-contained configurations, the return side area can exceed 100°F.

Standardization and acceptance
  Advantages: This configuration complies with all current standards, and there is growing industry acceptance of it. Contained airflow management requires no maintenance or user training.
  Disadvantages: The payback period may be longer due to higher capital expense requirements as compared with a partially contained solution. (If used without a raised floor, CAPEX may be lower.)

Future Outlook:
A fully contained passive cooling strategy, when used in conjunction with any of the active cooling systems mentioned in this paper, allows those systems to operate at a very high efficiency level because all of the cooled supply air is delivered to the intake of the IT equipment. The potential efficiency gains, high reliability, and low installation and maintenance TCO make this architecture comparable to the best of the other architectures. Contained airflow management is a newer strategy, is currently used in a number of installations globally, is readily available from a number of leading manufacturers, and will continue to be an economical choice for many cooling applications for the foreseeable future.

V. Equipment Placement Strategies

STRATEGY 2A: COOLING EQUIPMENT CABINET

Highlights:
Cabinet, in-rack, or closed-loop cooling equipment is wholly contained within a cabinet or is immediately adjacent to and dedicated to a cabinet. Cabinet architecture provides high-density spot cooling by augmenting and increasing the capacity and redundancy of room-based perimeter cooling systems. It features shorter, more predictable airflow paths, allowing high utilization of rated air conditioner capacity. It also offloads some of the CRAC fan power requirements, which increases efficiency, depending upon the room's architecture.
This strategy allows for variable cooling capacity and redundancy by cabinet. Because airflow distances are shortest and well-defined with this strategy, airflows are not affected by room constraints or installation variables, resulting in high air conditioning capacity utilization. This architecture handles power or equipment densities of up to 50 kW per cabinet.

Table 4. Cabinet cooling equipment advantages and disadvantages

Current usage and availability
  Advantages: This configuration is readily available, with equipment from a number of leading manufacturers. The system provides a flexible, modular deployment for use at the cabinet level.
  Disadvantages: This configuration may be limited to applications where there is ultra-high density in specific cabinets combined with low- to medium-density room-cooling capacity.

Efficiency
  Advantages: Cabinet cooling equipment closely couples heat removal with the heat-generation source to minimize or eliminate mixing. The airflow is completely contained within the cabinet.
  Disadvantages: This configuration is not as efficient in large-scale deployments when compared with other strategies. It requires additional rack-level electrical and cooling infrastructure and may result in stranded capacity when the cooling resource is used by a single rack.

Reliability
  Advantages: Risk to IT equipment from a single cooling failure is localized, based upon the many-to-many architecture.
  Disadvantages: Many rack cooling systems are not capable of redundancy, thus limiting resiliency. As with any active cooling, there is the possibility of liquid leaks; in-cabinet cooling requires additional piping, which creates additional potential leakage points. Due to the increased number of air conditioner units and related fans, reliability will be compromised if the system is not properly maintained. Redundancy requires a CRAC/CRAH system.

Equipment architectures
  Advantages: The modular and scalable design allows equipment density to vary in size, up to the cooling capacity. This architecture is immune to room effects and reconfiguration; rack layout can be completely arbitrary. Heat is captured at the rear of the rack, minimizing mixing with cool air. This configuration is capable of handling extreme densities and provides the flexibility to locate IT equipment in rooms and spaces that were not intended for such equipment. If sized so that high-density cooling is applied only where needed, this architecture could have a lower TCO than those that provide high-density cooling everywhere (including areas where it is not necessary).
  Disadvantages: Each cabinet requires a dedicated air conditioning unit along with associated chilled water or refrigerant piping, resulting in high total costs for installation and maintenance when compared with other strategies. There have been some concerns over increased potential for chilled water or refrigerant leakage. The cooling capacity cannot be shared with other cabinets.

Standardization and acceptance
  Advantages: Standardized solutions can be operated with minimal user training.

Future Outlook:
Cabinet, in-rack, or closed-loop cooling equipment design allows for dedicated, highly contained cooling at the individual cabinet level. This type of architecture allows for extremely high heat density cooling of IT equipment, independent of the ambient room conditions. As a result, it provides the greatest flexibility for cabinet locations. The modular, rack-oriented architecture is the most flexible, is fast to implement, and achieves extreme cooling densities, but at the cost of efficiency. Cabinet cooling is a newer architecture that is currently used in a number of installations. It is readily available from multiple leading manufacturers and will continue to be the popular choice for most extreme cooling density applications for the foreseeable future.

STRATEGY 2B: COOLING EQUIPMENT ROW

Highlights:
Row-based cooling is an air distribution approach in which the air conditioners are dedicated to a specific row of cabinets.
Configurations consist of row-based air conditioners installed between cabinets, or rear-door and overhead heat exchangers.

This strategy features shorter, more predictable airflow paths with less mixing, which allows high utilization of rated air conditioner capacity. It also allows cooling capacity and redundancy to vary by row.

This architecture uses either refrigerant or chilled water cooling coils; both require remote heat rejection.

It provides high-density spot cooling by augmenting and increasing the capacity and redundancy of noncontained, room-based perimeter cooling systems, thus increasing efficiency.

Row-oriented architecture handles heat densities of up to 30 kW per cabinet.

Table 5. Row-based cooling equipment advantages and disadvantages

Current usage and availability
  Advantages: This strategy is widely used, particularly in medium-sized and high-density applications, and it is available from a number of leading manufacturers.
  Disadvantages: This strategy may not be ideal for very small, low-density server spaces or for the very largest deployments.

Efficiency
  Advantages: Airflow paths are shorter than in comparable room configurations, which improves efficiency. Significant efficiency improvements have been shown, especially in high-recirculation/low-containment environments, and even greater efficiency can be achieved when row cooling is combined with a containment strategy.
  Disadvantages: This configuration has shown diminishing efficiency returns for larger, higher-resiliency, or lower-density applications. Most row-based cooling systems, by their nature, minimize mixing; however, airflows may not be completely contained and independent of the room environment, and any mixing of row and room air streams reduces efficiency.

Reliability
  Advantages: Row-based cooling has proven to be as reliable as other active cooling architectures. Additional row air conditioning units can be added as needed to meet resiliency requirements.
  Disadvantages: This strategy requires additional row-level electrical and cooling infrastructure. Compared with room architecture, row cooling requires additional piping, creating additional potential leakage points. The increased number of air conditioner units and related fans, compressors, pumps, valves, controls, etc. creates more potential failure points. Redundancy requires a CRAC/CRAH system.

Equipment architectures
  Advantages: The system provides a flexible, modular deployment that is easily scaled with equipment of various sizes, and organizations can add cooling capacity to their existing systems. Cooling capacity is well defined and can be shared across the row. This strategy is excellent for retrofitting existing installations, and overhead heat exchangers can be deployed over any manufacturer's cabinets.
  Disadvantages: Row-based configurations require additional floor space so rows can accommodate air conditioner units; this increases compute cost per square foot, an important consideration in most data centers. This strategy is less flexible where heat loads vary from one end of the floor or row to another.

Standardization and acceptance
  Advantages: This architecture has wide acceptance and significant global deployment. Standardized solutions can be operated with minimal user training.
  Disadvantages: Row-based configurations can require proprietary cabinets, and rear-door exchangers must be designed to match the host cabinets.

Future Outlook: Row-based cooling is an ideal solution for high-density configurations where additional capacity is needed to eliminate hot spots while improving overall energy efficiency in open or partial airflow management environments. It does so by bringing the heat transfer closer to the IT equipment source, near the cabinet: moving the air conditioner closer to the cabinet ensures more precise delivery of supply air and more immediate capture of exhaust air. The modular, row-oriented architecture provides many of the flexibility, speed, and density advantages of the rack-oriented approach, but at a cost similar to room-oriented architecture. Row cooling is a newer architecture that is commonly used today, is readily available from a number of leading manufacturers, and will continue to be a popular choice for most applications for the foreseeable future.

STRATEGY 2C: COOLING EQUIPMENT PERIMETER
Highlights: Perimeter or room-oriented architecture consists of one or more air conditioners placed around the perimeter of the data center that supply cool air via a system of plenums, ducts, dampers, vents, etc. This architecture must be designed and built into a building's infrastructure.

Additional efficiencies are possible when it is used in conjunction with partial or full-containment airflow strategies, such as hot/cold aisles with raised floors to distribute air and drop-ceiling plenums for return air.

A CRAC contains an internal compressor, fans, and a coil and uses the direct expansion of refrigerant to remove heat, whereas a CRAH contains only fans and a cooling coil and typically uses chilled water to remove heat. Both CRACs and CRAHs rely on outdoor heat rejection: CRACs use remote condensers, while CRAHs traditionally use chiller systems.

Table 6. Perimeter cooling equipment advantages and disadvantages

Current usage and availability
  Advantages: This configuration is widely accepted and is available from many well-known suppliers.
  Disadvantages: This configuration may not be ideal for small or very high density environments.

Efficiency
  Advantages: In partial containment environments, floor tiles can be quickly and easily relocated to optimize cool air distribution. Very efficient high heat densities are possible when this architecture is used in conjunction with robust containment and economization.
  Disadvantages: High heat densities require minimization of recirculation and bypass air, or full containment. Unless a plenum solution is used, the heat return is not dispersed evenly. Sub-floor obstructions can affect static pressure, and this configuration is more susceptible to recirculation and bypass than row or cabinet strategies. Room constraints, such as CRAC/CRAH location, room shape, ceiling height, under-floor obstructions, etc., affect airflow and therefore overall efficiency.

Reliability
  Advantages: Perimeter cooling units have been in use for a long time and have proven dependable.
  Disadvantages: As with any active cooling, there is the potential for liquid leakage. Redundancy schemes need careful review to ensure that all failure modes can truly support the data center environment.

Equipment architectures
  Advantages: This configuration promotes efficient use of computer room white space because cabinets can be placed side by side in long rows rather than broken up with cooling units in the IT rack space, as is the case with some in-row cooling configurations. The physical layout of CRACs or CRAHs can vary, and the architecture is applicable to both retrofit and new installations.
  Disadvantages: This configuration requires more upfront design engineering to ensure that future capacity requirements are met. It is not as scalable as other architectures for higher-density applications.

Standardization and acceptance
  Advantages: This is the current standard in most data centers around the world.
  Disadvantages: Supplemental techniques, such as containment or row-based cooling, are routinely used to augment perimeter cooling.

Future Outlook: Perimeter or room-oriented architecture is the traditional methodology for cooling data centers. It can offer a well-tested and cost-effective means of providing high efficiency and reliability, especially when deployed in medium-sized applications in conjunction with a high-containment strategy. Even higher efficiencies are possible when this architecture is used with air or chilled water economization. A number of leading manufacturers offer highly efficient designs featuring high Delta-Ts and variable-speed fans, and these designs will continue to be popular for the foreseeable future.

STRATEGY 2D: COOLING EQUIPMENT ROOFTOP/BUILDING EXTERIOR

Highlights: This strategy uses the building's central air-handling units to cool the data center. The cooling equipment located outside the computer room typically consists of roof chillers and towers associated with the central plant, and it may also support cooling equipment within the white space. Equipment can also be installed at ground level and ducted through walls.
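Rooftop and exterior systems move large air volumes through ducts, and fan power scales directly with the static pressure those ducts add. A minimal sketch of that relation, fan shaft power = ΔP·Q/η (all flow, pressure, and efficiency figures below are illustrative assumptions, not values from this paper):

```python
# Sketch: why long duct runs from rooftop units raise fan energy.
# Fan shaft power = (static pressure drop * volumetric flow) / fan efficiency.
# All numbers are illustrative assumptions, not figures from the white paper.

def fan_power_kw(flow_m3s: float, pressure_pa: float, efficiency: float) -> float:
    """Fan shaft power in kW for a given airflow, total static pressure,
    and combined fan/drive efficiency."""
    return flow_m3s * pressure_pa / efficiency / 1000.0

flow = 40.0        # m^3/s of supply air (illustrative)
short_run = 600.0  # Pa: air handler close to the white space
long_run = 1100.0  # Pa: rooftop unit with extended ductwork

print(f"short duct run: {fan_power_kw(flow, short_run, 0.65):.1f} kW")  # ~36.9 kW
print(f"long duct run:  {fan_power_kw(flow, long_run, 0.65):.1f} kW")   # ~67.7 kW
```

The same airflow costs nearly twice the fan energy once the extra duct pressure is added, which is the efficiency trade-off the rooftop strategy must manage.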
This strategy represents the most efficient use of computer room white space because the cooling equipment is located on the roof or on exterior land, allowing cabinets to be placed side by side in long rows.

It can be used in conjunction with any of the previously described airflow management strategies, which affects the data center's energy efficiency accordingly.

As with room-oriented architecture, cool air is supplied to the data center via a system of plenums, ducts, dampers, vents, etc. This architecture must be designed and built into a building's infrastructure.

Table 7. Rooftop/building exterior cooling equipment advantages and disadvantages

Current usage and availability
  Advantages: This is the traditional approach for building HVAC systems and is therefore widely available from many suppliers. The architecture has been widely and effectively adapted for extremely large data center applications.
  Disadvantages: This configuration is not cost-effective for smaller applications.

Efficiency
  Advantages: Exterior systems can be very efficient, especially when used with robust containment and economization. Their larger fans are typically more efficient.
  Disadvantages: Without full containment, hot spots can be problematic. Organizations can incur increased energy costs for the fans that move air from the roof through ducts to the data center.

Reliability
  Advantages: The reliability of rooftop/exterior cooling equipment is similar to that of other liquid cooling offerings on the market today.
  Disadvantages: Roof penetrations create the potential for water leaks into the data center, and, as with any active cooling, there is the possibility of liquid leaks. Redundancy is a greater challenge because of the larger capacity of the exterior units and the ductwork and dampers needed to provide overlapping cooling in the data center. Units are exposed to the outdoor elements, potentially reducing life expectancy.

Equipment architectures
  Advantages: This strategy reduces the data center's floor space requirements, and it is cost-effective because roof space costs less per square foot than interior space. For new applications, this strategy must be designed into the building's infrastructure.
  Disadvantages: Exterior cooling units occupy land area, adding facility costs. This strategy requires more upfront design engineering to ensure that future capacity requirements will be met, and the configuration is less flexible than other architectures.

Standardization and acceptance
  Advantages: This architecture is common, available globally, and widely accepted.
  Disadvantages: Extended pipe and duct runs are required to connect to non-self-contained HVAC systems, and additional structural work on the building may be required. This approach is not cost-effective for retrofit applications.

Future Outlook: Rooftop architecture is the traditional methodology for cooling buildings and has been well adapted to cooling data centers. It can offer a well-tested and cost-effective means of providing high efficiency and reliability, especially when designed into medium- to large-scale applications in conjunction with a robust containment strategy. Even higher efficiencies are possible when this approach is used with air or chilled water economization. A number of leading manufacturers offer highly efficient designs featuring high Delta-Ts and variable-speed fans, and these designs will continue to be popular for new installations for the foreseeable future.

VI. Heat Rejection Strategies

STRATEGY 3A: HEAT REJECTION CHILLED WATER SYSTEM

Highlights: A chilled water heat rejection system uses chilled water rather than refrigerant to transport heat energy between the air handlers, chillers, and the outdoor heat exchanger (typically a cooling tower in North America or a dry cooler in Europe).

The system uses either water or a water/glycol solution to carry heat away from the air handlers serving the data center. This fluid may then be cooled by a chiller using mechanical refrigeration, by heat exchange with a cooling tower water loop, or by dry coolers operating in conjunction with air-cooled chillers.

Chilled-water-cooled HVAC systems are inherently more efficient at removing the large heat loads typically found in large data centers, assuming proper airflow management is used.
The components of the chiller (evaporator, compressor, air- or water-cooled condenser, and expansion device) often come pre-installed from the factory, reducing field labor and installation time and improving reliability.
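The efficiency advantage of water as a heat-transport medium follows from its volumetric heat capacity, roughly 3,500 times that of air. A minimal sketch of the governing relation Q = ṁ·cp·ΔT (the 6 K coolant temperature rise is an illustrative assumption; the 1,055 kW load matches the paper's 300-ton threshold):

```python
# Sketch: water vs. air as a heat-transport medium for the same cooling load.
# The 6 K temperature rise is an illustrative assumption; 1,055 kW is the
# paper's 300-ton threshold. From Q = m_dot * cp * dT.

CP_WATER = 4.186    # kJ/(kg*K), specific heat of water
CP_AIR = 1.006      # kJ/(kg*K), specific heat of air
RHO_WATER = 1000.0  # kg/m^3
RHO_AIR = 1.2       # kg/m^3, near sea level

def volumetric_flow_m3s(load_kw: float, delta_t_k: float,
                        cp: float, rho: float) -> float:
    """Volumetric flow needed to carry `load_kw` with temperature rise `delta_t_k`."""
    return load_kw / (cp * delta_t_k * rho)

water = volumetric_flow_m3s(1055, 6, CP_WATER, RHO_WATER)  # ~0.042 m^3/s (42 L/s)
air = volumetric_flow_m3s(1055, 6, CP_AIR, RHO_AIR)        # ~146 m^3/s

print(f"water: {water * 1000:.1f} L/s, air: {air:.0f} m^3/s")
```

Moving the same heat in water takes a flow several thousand times smaller by volume than moving it in air, which is why piping chilled water across a large facility beats ducting air over the same distances.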
Because they are installed in a central location, chilled water systems offer centralized maintenance and improved control characteristics, through the minimization of leak potential and the simplification of containment, compared with equipment located on the roof.

Chilled water systems offer reduced TCO, with lower installed costs for larger systems over 300 tons (1,055 kW) and lower operating costs when using a cooling tower or an evaporative condenser.

Water-cooled units make less noise, offer more cooling per square foot, and, depending upon their environment, usually require somewhat less routine maintenance than Strategies 3B and 3C described below.

Depending upon the geographic location, which determines average ambient temperature and humidity conditions based on annual statistical weather data, both chilled water and direct expansion architectures may achieve significant additional energy savings by taking advantage of air or water economization.

Table 8. Chilled water system advantages and disadvantages

Current usage and availability
  Advantages: This is the traditional approach for building HVAC systems and is therefore widely available from many suppliers. The architecture has been widely and effectively adapted for use in data center applications.

Efficiency
  Advantages: Proven technology provides high overall efficiency. Although this strategy generally makes for a more expensive plant to build, floor units offer more efficient cooling per square foot, and the architecture is less costly to operate than other architectures, particularly as scale increases beyond 300 tons (1,055 kW).
  Disadvantages: Efficiency is dependent upon the year-round outside temperature.

Reliability
  Advantages: Maintenance is approximately equivalent to that of other active systems. Centralized systems can provide redundancy through multiple chillers and pumps.
  Disadvantages: This configuration has many components compared with some other systems, which may result in lower reliability. Leaks require containment, and many operators do not like having IT equipment near water. Large chiller building blocks require redundant chillers and pump systems.

Equipment architectures
  Advantages: The centralized design reduces maintenance and improves control. This strategy offers flexibility: adding new chilled water circuits to the existing system is a relatively simple operation, and more chillers and pumps can be added to increase cooling capacity. The architecture is well suited to cooling large, multi-story buildings and to covering very long distances along the same floor level. Chilled water configurations can also operate at lower noise levels than other systems.
  Disadvantages: Maintenance and water treatment costs are the main disadvantage of evaporative cooling towers and evaporative condensers. Organizations incur higher installed costs for systems under 300 tons (1,055 kW), compared with DX systems.

Standardization and acceptance
  Advantages: This architecture is common, available globally, and widely accepted.
  Disadvantages: Due to high-volume water consumption, cities and municipalities may enact limits that create availability concerns and increase costs. (Chillers are also available in air-cooled versions, which do not consume water.)

Future Outlook: Chilled water systems are another traditional methodology for cooling buildings and have been well adapted to cooling data centers. They can offer a well-tested and cost-effective means of providing high efficiency and reliability, especially when designed into medium- to large-scale applications in conjunction with a high-containment strategy. A number of leading manufacturers offer high-efficiency designs, and chilled water will continue to be a popular choice for new installations for the foreseeable future.

STRATEGY 3B: HEAT REJECTION DIRECT EXPANSION (DX) REFRIGERATION SYSTEM

Highlights: This cooling strategy uses refrigerant in a direct compression/expansion cooling system. In a direct expansion unitary system, the evaporator is in direct contact with the air stream, so the cooling coil of the air-side loop is also the evaporator of the refrigeration loop.
The term "direct" refers to the position of the evaporator with respect to the air-side loop. In DX systems, the treated air stream passes over the outside (fin side) of the evaporator coil, so it is directly cooled by the expansion of refrigerant passing through the tubes of the coil.

DX systems, especially packaged DX systems, are more economical for smaller building cooling loads of less than 300 tons (1,055 kW), where installation costs are lower than for chilled water systems because DX requires less field labor and fewer materials to install.
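The ton-to-kilowatt figures used throughout this section follow from the standard conversion 1 ton of refrigeration = 3.517 kW. A minimal sketch of that conversion and of the paper's ~300-ton rule of thumb (the threshold comes from the text; the selector function is only an illustration, not a design rule):

```python
# Sketch of the ~300-ton rule of thumb for choosing between DX and chilled
# water heat rejection. The conversion factor is standard (1 ton of
# refrigeration = 3.517 kW); the threshold is taken from the white paper,
# and the selector below is illustrative, not a substitute for design work.

TON_KW = 3.517  # kW per ton of refrigeration

def tons_to_kw(tons: float) -> float:
    """Convert tons of refrigeration to kilowatts of heat rejection."""
    return tons * TON_KW

def suggested_heat_rejection(load_tons: float) -> str:
    """Rule of thumb from the text: DX tends to cost less to install below
    ~300 tons; chilled water tends to cost less to build and operate above."""
    return "DX" if load_tons < 300 else "chilled water"

print(round(tons_to_kw(300)))         # 1055 kW, matching the paper's figure
print(suggested_heat_rejection(120))  # DX
print(suggested_heat_rejection(500))  # chilled water
```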
Packaged DX systems using air-cooled condensers generally take up less floor space, so they are frequently installed on a building's roof, in a small room adjacent to a data center, or along the perimeter of a data center where condenser air can be ducted in and out of the building.

Depending upon the geographic location, which determines average ambient temperature and humidity conditions based on annual statistical weather data, both chilled water and direct expansion architectures may achieve significant additional energy savings by taking advantage of air or water economization.

Table 9. Direct expansion (DX) refrigeration advantages and disadvantages

Current usage and availability
  Advantages: This technology is readily available from many major suppliers, and the architecture has been widely and effectively used in data center applications.

Efficiency
  Advantages: DX is generally more cost-effective for cooling smaller rooms and heat loads under 300 tons (1,055 kW). The architecture requires less labor and fewer materials for installation, which reduces costs, especially with packaged systems. The energy usage of individual packaged DX units can easily be measured.
  Disadvantages: In systems above 300 tons, optimally designed DX systems may be less efficient than optimally designed chilled water or economized systems.

Reliability
  Advantages: DX refrigerant leaks evaporate and therefore do not require a containment area, as chilled water systems do. With multiple units, there is no single point of failure, and redundancy strategies can be applied more efficiently. Maintenance is generally easier than with other architectures due to a simpler system design.

Equipment architectures
  Advantages: New systems are generally less expensive to build for data centers under 300 tons (1,055 kW).
  Disadvantages: Distance constraints between the condenser and the air handling unit, and on refrigerant piping, limit DX systems to smaller buildings or to rooms on a single floor, with no option for multi-story high rises. This configuration is noisier than
Benefits of Cold Aisle Containment During Cooling Failure Introduction Data centers are mission-critical facilities that require constant operation because they are at the core of the customer-business
Energy and Cost Analysis of Rittal Corporation Liquid Cooled Package Munther Salim, Ph.D. Yury Lui, PE, CEM, LEED AP eyp mission critical facilities, 200 west adams street, suite 2750, Chicago, il 60606
The Advantages of Row and Rack- Oriented Cooling Architectures for Data Centers By Kevin Dunlap Neil Rasmussen White Paper #130 Executive Summary Room cooling is an ineffective approach for next-generation
Lesson 36 Selection Of Air Conditioning Systems Version 1 ME, IIT Kharagpur 1 The specific objectives of this chapter are to: 1. Introduction to thermal distribution systems and their functions (Section
FEDERAL ENERGY MANAGEMENT PROGRAM Data Center Airflow Management Retrofit Technology Case Study Bulletin: September 2010 Figure 1 Figure 2 Figure 1: Data center CFD model of return airflow short circuit
Your consent to our cookies if you continue to use this website.