
RESEARCH UNDERWRITER WHITE PAPER
LEAN, CLEAN & GREEN
Wright Line

Creating Data Center Efficiencies Using Closed-Loop Design
Brent Goren, Data Center Consultant

Currently, 60 percent of the cool air supplied from the air-conditioning units in a typical data center is wasted. This white paper provides information to help achieve greater efficiencies within the data center by optimizing the physical cooling capacity while maintaining expected levels of reliability.

Publisher's Note: This co-branded and copyrighted paper is published for inclusion in the formal compilation of papers, presentations, and proceedings known as The Path Forward v4.0: Revolutionizing Data Center Efficiency, of the annual Uptime Institute Green Enterprise IT Symposium, April 13-16, 2009, New York City. It contains significant information value for the data center professional community which the Institute serves. Although written by a research underwriting partner of the Institute, the Institute maintains a vendor-neutral policy and does not endorse any opinion, position, product, or service mentioned in this paper. Readers are encouraged to contact this paper's author(s) directly with comments, questions, or requests for further information.

Data center trends have traditionally focused on delivery of service and reliability. Recently, however, the focus has shifted toward greater efficiency. Until now there has been little incentive for data center managers to optimize the efficiency of their facilities; they remain primarily concerned with the capital costs related to capacity and reliability. A study by research analyst firm IDC shows that for every dollar of new server spend in 2005, 48 cents was spent on power and cooling, a sharp increase from 2000, when the ratio was 21 cents per dollar of server spend, and the ratio is expected to climb further. The demand to create more efficient data centers will therefore be at the forefront of most companies' cost-saving initiatives. Efficiency gains, however, must be balanced so that there is no compromise in data center reliability and performance.

Legacy Data Center Design Issues

A legacy data center typically has the following characteristics:

- An open system that delivers cold air at about 55°F via overhead ducting or a raised-floor plenum
- Perforated tiles (in a raised-floor environment) used to channel the cold air from beneath the raised-floor plenum into the data center
- Rows of racks oriented 180 degrees from alternate rows to create hot and cold aisles
- A minimum of four feet of separation between cold aisles and three feet between hot aisles [1]
- Precision air conditioning units located at the ends of each hot aisle

In practice, the airflow in a legacy data center is very unpredictable and has numerous inefficiencies, which proliferate as power densities increase. This is shown in Figure 1, where bypass air, recirculation, and air stratification are the dominant airflow characteristics throughout the data center.

Figure 1. Bypass airflow, recirculation, and air stratification

Bypass Airflow

Bypass airflow is defined as conditioned air that doesn't reach computer equipment [2]. The most common form of bypass air occurs when air supplied from the precision air conditioning units is delivered directly back to the air-conditioner intakes. Examples include leakage through cable cut-outs, holes under cabinets, or misplaced perforated tiles that blow air directly back to the air-conditioner intakes. Other examples of bypass airflow include air that escapes through holes in the computer room perimeter walls and through non-sealed doors. In conventional legacy data centers, as little as 40 percent of the air delivered from precision air conditioning units may actually make its way to cool the IT equipment, a great waste of energy as well as an excessive and unnecessary operational expense.
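To make that bypass figure concrete, the short sketch below estimates the bypass fraction from two quantities a site can measure or estimate: the total airflow delivered by the precision air conditioners and the airflow actually drawn through the IT equipment, with the latter derived from the heat load via the standard sensible-heat relation CFM = 3.16 x W / delta-T(°F). The load, temperature rise, and supply figures are illustrative assumptions, not measurements from this paper.

```python
# Illustrative estimate of bypass airflow in an open (legacy) room.
# All numeric inputs are assumptions chosen for the example.

def it_airflow_cfm(it_load_watts: float, delta_t_f: float) -> float:
    """Airflow drawn through the IT equipment, from the sensible-heat
    relation CFM = 3.16 * W / delta_T(F)."""
    return 3.16 * it_load_watts / delta_t_f

it_load_w = 200_000       # 200 kW of IT load (assumed)
server_delta_t_f = 20.0   # typical temperature rise across the servers (assumed)
crac_supply_cfm = 80_000  # total airflow delivered by the CRAC units (assumed)

needed_cfm = it_airflow_cfm(it_load_w, server_delta_t_f)   # about 31,600 CFM
bypass_cfm = max(crac_supply_cfm - needed_cfm, 0.0)
bypass_fraction = bypass_cfm / crac_supply_cfm

print(f"IT equipment draws about {needed_cfm:,.0f} CFM")
print(f"Bypass airflow: about {bypass_cfm:,.0f} CFM "
      f"({bypass_fraction:.0%} of the supply never reaches the IT intakes)")
```

With these assumed numbers, roughly 60 percent of the supplied air bypasses the IT equipment, in line with the figures quoted above.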

Recirculation

Recirculation occurs when hot air exhausted from rack-mounted computing devices is fed back into the device inlets. This principally occurs in servers located at the highest points of a high-density enclosure, illustrated in Figure 2 by the large area shown in red. Recirculation can cause overheating damage to computing equipment and disruption to mission-critical services.

1. Recommendations per ANSI/TIA/EIA-942, April 2005.
2. Reducing Bypass Airflow Is Essential for Eliminating Hotspots, by Robert F. Sullivan, Ph.D.

Figure 2. Recirculation

Hot and Cold Remixing and Air Stratification

Air stratification in the data center is the layering effect of temperature gradients from the floor to the ceiling of the computer room. In a raised-floor environment, air is delivered at approximately 55°F from under the raised floor through perforated tiles. As the air first penetrates the perforated tile, its temperature matches the supply temperature, but as the air moves vertically up the rack's front face, the temperature gradually increases. This occurs because insufficient airflow is delivered through the perforated tiles, which allows the hot exhaust air to penetrate the cold-aisle region. In high-density enclosures, it is not uncommon for temperatures to exceed 90°F at the server inlets mounted at the highest point of the enclosure. However, the recommended temperature range for server inlets, as stated by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Technical Committee 9.9 for Mission Critical Facilities, is between 64.4°F and 80.6°F. Thus, in a legacy data center design the computer room is actually being over-cooled, sending extremely cold air under the raised floor to compensate for the wide range of temperatures at the device inlets.

Data Center Heat Source: Processor Performance and the Need for Speed

In our modern economy, companies need to maintain growth and profitability, which demands delivery of better, faster, richer, and more reliable products and services to remain competitive. This constant need for speed reflects the modern-day business compulsion to consume increasing levels of computing performance to maintain or attain a competitive advantage. Until recently, however, most IT departments never related this exponential growth of processing power to its effect on power consumption. In fact, the ratio of processor performance to power has improved significantly over the last several years: processor manufacturers have made major technology breakthroughs that increase the performance of the processor while consuming less power. The actual culprit of the power consumption issue is the exponential growth in power densities. Processor manufacturers such as Intel and AMD are making processors smaller and denser, so that server manufacturers can incorporate a greater number of processors in a smaller footprint.

Data collected by Intel Corporation shows that current processor technology consumes 24 percent of the power needed to execute the same workload in roughly the same time period as the processor technology used in 1999, less than one-quarter of the power consumption of less than a decade ago. However, the power density (the amount of electric power consumed per unit of chip area) has increased by a factor of 16 during the same period, which creates the fundamental cooling problem from the chip throughout the critical computing environment.

Virtualization software has also significantly reduced power consumption in the data center by taking advantage of underutilized processing power within each server, in effect consolidating many physical servers into one. Although total power consumption decreases considerably with virtualization, the power density per physical server increases. These strategies provide tremendous impact in reducing energy consumption; the challenge is that these technological advances come with a cost.
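One way to read those two figures together: if the same workload now takes roughly a quarter of the power while the power drawn per unit of chip area grew sixteen-fold, the silicon footprint doing that work shrank by more than a factor of 60, so far more heat is concentrated in each rack position. The sketch below walks through that arithmetic; the rack-level kilowatt figures at the end are illustrative assumptions, not data from this paper.

```python
# Reading the processor figures quoted above together (simplified arithmetic;
# assumes the same workload completes in the same time, and that "power
# density" means power per unit of chip/footprint area).

power_per_workload_ratio = 0.24   # 2009 vs. 1999: same work, 24% of the power
power_density_ratio = 16.0        # watts per unit area grew 16x

# Area occupied by the silicon doing that same workload:
area_ratio = power_per_workload_ratio / power_density_ratio
print(f"Footprint per unit of work: {area_ratio:.3f}x "
      f"(about {1 / area_ratio:.0f}x smaller than in 1999)")

# The same compression shows up at the rack: illustrative (assumed) loads.
legacy_rack_kw = 3.0   # loosely typical legacy rack (assumed)
dense_rack_kw = 20.0   # densely packed blade/virtualized rack (assumed)
print(f"Heat to remove per rack grows {dense_rack_kw / legacy_rack_kw:.1f}x, "
      "even though energy per unit of work fell.")
```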
With increasing power densities per cabinet, traditional computer room cooling designs cannot prevent server exhaust recirculation and thus become unreliable as a means of cooling.

Closed-Loop Heat Containment: What Is a Closed-Loop Design?

The legacy data center is an open system where air is allowed to move freely throughout the data center. A closed-loop design is a solution whereby all the air supplied by the computer-room air conditioners is delivered to the intakes of the rack-mounted computing equipment and all the hot air exhaust is delivered directly back to the intake of the air-conditioning system.
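The practical consequence of closing the loop can be sketched with the same sensible-heat relation used earlier: when the hot and cold streams do not remix, the air conditioners see the full server temperature rise, so less airflow has to be moved for the same heat load. The temperature pairs below mirror the legacy and closed-loop supply/return figures discussed later in this paper; the 200 kW load is an assumption for the example.

```python
# Airflow required to remove the same heat load at two return-minus-supply
# temperature differences.  Sketch only; the load is assumed, and the
# temperature pairs mirror figures discussed later in this paper.

def required_cfm(load_watts: float, delta_t_f: float) -> float:
    # Sensible-heat relation: CFM = 3.16 * W / delta_T(F)
    return 3.16 * load_watts / delta_t_f

it_load_w = 200_000

legacy_delta_t = 70 - 54       # open room: return 70 F, supply 54 F
contained_delta_t = 95 - 68    # closed loop: return 95 F, supply 68 F

legacy_cfm = required_cfm(it_load_w, legacy_delta_t)
contained_cfm = required_cfm(it_load_w, contained_delta_t)

print(f"Open room:   {legacy_cfm:,.0f} CFM at a {legacy_delta_t} F delta-T")
print(f"Closed loop: {contained_cfm:,.0f} CFM at a {contained_delta_t} F delta-T")
print(f"Airflow reduction: {1 - contained_cfm / legacy_cfm:.0%}")
```

Moving less air per kilowatt of load reduces fan energy and removes the need for the oversupply a legacy room uses to mask remixing.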

There are essentially two current methods available for achieving a closed-loop design.

Cold-air containment. A cold-air containment system is one in which the cold air supply from the computer room air-conditioning unit is isolated and the hot air is allowed to move freely throughout the room. This can be done by completely isolating the cold aisle in the data center or by using a ducted, enclosed channel attached to the front of the enclosure that draws cold air directly to the server intakes.

Heat containment. Heat containment is achieved by capturing all the hot air exhausted from the rack-mounted computing equipment and directing it to the intake of the computer room air conditioner without any cold air contamination. This can be accomplished by enclosing the hot aisle or the enclosures and having a heat-rejection system pump the heat from these contained units out of the data center. Alternatively, a ducting system that directs the hot air from the rear of the rack enclosure to the air-conditioner intakes can be used.

Closed-Loop Heat Containment Solutions

Closed-loop design is an adaptive concept built on the premise of providing customers with ease of deployment that integrates with existing infrastructure. Not unlike LEGO building blocks, once the foundation is created, all the other pieces fit together. Once an adaptable enclosure frame is installed, there are several solutions available to the customer, each with benefits to meet the customer's requirements.

Passive exhaust system. This containment system incorporates a chimney attached to the back of an adaptable-frame enclosure. In this case, the chimney is designed over the rear corner of the rack to ensure access to overhead cable management such as ladder trays. The heat containment system relies on all the hot exhaust air being directed through the chimney, so much attention has been placed on minimizing any air leaks in the cabinet. The rack must be deployed with a sealed solid back door, and covers must be used on other exposed areas to ensure the exhaust air does not leak outside the cabinet. The passive system depends on the computing equipment's exhaust fans to deliver enough airflow volume through the chimney. Thus, in a passive exhaust system one needs to be cognizant of potential pressurized backflow with low-flow exhaust configurations.

Assisted exhaust system. This heat containment design uses fans within the attached enclosure chimney to assist the airflow through the ducted vent. This system should be used in conjunction with a fan speed controller to optimize the airflow volume within the rack. One of the advantages of the assisted system is the ability to control the flow of air. If the server exhaust is not strong enough, air from the surrounding room or the plenum can enter the rear of the rack, causing remixing. The key strategy in using an assisted exhaust-based system is to control the flow of air such that there is a slight negative pressure at the very top of the enclosure and zero static pressure throughout the rest of the rear portion of the rack. This strategy optimizes airflow performance to ensure the heat is exhausted, eliminating the risk of backflow.
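As a rough illustration of that control strategy, the sketch below nudges the chimney fans up when the pressure at the top of the enclosure drifts toward zero and backs them off when the draw becomes too strong. The setpoint, gain, and sensor/fan interfaces are hypothetical placeholders, not the behavior of any particular product.

```python
# Hypothetical fan-speed loop for an assisted (fan-ducted) exhaust chimney.
# Goal: hold a slight negative pressure at the top of the rear enclosure so
# hot exhaust is pulled out without room or plenum air being drawn back in.

SETPOINT_PA = -2.0   # slight negative pressure target, in pascals (assumed)
KP = 3.0             # proportional gain, % fan speed per Pa of error (assumed)

def update_fan_speed(current_speed_pct: float, pressure_pa: float) -> float:
    """One control step: pressure above the setpoint (not negative enough)
    speeds the fans up; pressure well below it slows them down."""
    error = pressure_pa - SETPOINT_PA
    new_speed = current_speed_pct + KP * error
    return min(max(new_speed, 20.0), 100.0)   # clamp to a safe duty range

# Example readings as the loop pulls the chimney toward the setpoint.
speed = 60.0
for reading_pa in (-0.5, -1.2, -1.8, -2.1):
    speed = update_fan_speed(speed, reading_pa)
    print(f"pressure {reading_pa:+.1f} Pa -> fan speed {speed:.1f}%")
```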
Application of Heat-Containment Systems

Closed-loop heat containment solutions are designed to adapt to existing infrastructures and also provide a solution for greenfield applications. The application of heat containment systems increases efficiencies within the data center by reducing bypass airflow and recirculation, allowing the heat to flow directly to the air-conditioner intakes.

To achieve heat containment with the active- or passive-ducted exhaust option, the hot air exhaust that flows through the enclosure's chimney attachment must work in conjunction with other facilities to continue the flow directly back to the precision air-conditioner intakes without remixing. There are effectively two methods to achieve this.

Extending the adaptable enclosure's rear duct to a plenum ceiling. A closed-loop system can be attained by extending the rear duct from the back of the frame to a drop-ceiling plenum and adding a ducted return from the ceiling plenum back to the air-conditioner intakes. If a drop ceiling is already in place, this has the advantage of minimizing the cost of building out a dedicated ducted heat return.

Direct-ducted exhaust return. If no plenum ceiling exists, it may be possible to duct the hot air exhaust directly back to the air-conditioner intakes. This has the advantage of providing a more controlled heating, ventilating, and air conditioning (HVAC) environment, since the air path is 100 percent dedicated to heat containment.

Quantifying Closed-Loop Efficiencies

Recent articles make generalizations about an enclosure's ability to cool based on the power densities within the rack. Specifically, the enclosure is essentially a passive device [3] that does not provide any cooling. Thermally, the function of the enclosure is to ensure that adequate airflow can be provided to the computing equipment intakes and that the heat generated by the equipment is not trapped within the enclosure. However, with the recent increases in power densities and data center energy costs, the enclosure has evolved into a critical piece of the data center and now needs to be part of an integrated strategy for achieving greater efficiencies.

The foundations of closed-loop design efficiency savings can be established by optimizing four conditions:

1. Provide a consistent air temperature between 64.4°F and 80.6°F to all computing equipment (this is a statement of reliability as provided by ASHRAE TC 9.9).
2. Ensure the air temperature leaving the server exhaust matches as closely as possible the intake temperature of the computer room air conditioner.
3. Make certain there is sufficient air flowing to the inlets of all the computing equipment.
4. Ensure the computer room is sealed as much as possible and avoid air leakage wherever it occurs.

In open legacy infrastructures, the only way to maintain ASHRAE's recommended temperature range for reliability in high-density environments is to oversupply the amount of cooling in the room, in some cases by as much as 50 percent of the necessary airflow. The cost of ensuring reliability therefore comes from reducing overall efficiency and significantly increasing the amount of bypass airflow in the room. On the other hand, by supplying only the necessary volumetric airflow, the traditional data center cooling design cannot prevent the server exhaust from feeding back to the device inlets, which reduces the reliability of the IT equipment. In a legacy data center there is a tradeoff between efficiency and reliability; increasing one negatively affects the other. In a closed-loop heat containment system, because the hot and cold air streams are isolated, recirculation is no longer a concern; the cooling supply can be optimized to meet demand, and there is no reliability penalty when increasing efficiency.

In traditional data centers, cold air is supplied from the precision air conditioners at very cold temperatures (approximately 55°F). The air is supplied this cold to counter the high temperatures detected at the top of many enclosures caused by hot and cold air remixing. However, if the heat can be contained and not remixed with the cold air, there is no reason to supply such cold air under the raised floor. Studies have shown that increasing the chilled water supply temperature from 45°F to 55°F, or raising the air supply temperature to 65°F, achieves a 16 percent reduction in chiller energy consumption [4].

Closed-loop heat containment systems can further increase efficiency when combined with air-side economizers. During the appropriate seasonal conditions, outside air can be used to cool the data center instead of consuming large amounts of energy on mechanical cooling.
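The location-dependent comparison that follows reduces to a counting exercise over a year of hourly weather data: for each hour, check whether outside air (with a small margin for fan heat and mixing) can meet the required supply temperature. The sketch below shows that logic on a tiny made-up temperature sample; the two setpoints mirror the 70°F and 53°F cases discussed next, and a real assessment would also screen each hour for acceptable humidity.

```python
# Counting air-side economizer hours for two supply-temperature setpoints.
# The hourly outdoor temperatures are a tiny made-up sample; a real study
# would use a full year (8,760 hours) of local weather data.

def economizer_fraction(outdoor_temps_f, supply_setpoint_f, approach_f=3.0):
    """Fraction of hours when outside air, plus a small approach margin,
    is cold enough to meet the required supply temperature."""
    usable = sum(1 for t in outdoor_temps_f if t + approach_f <= supply_setpoint_f)
    return usable / len(outdoor_temps_f)

sample_hours_f = [48, 52, 55, 58, 61, 63, 66, 68, 71, 75, 79, 83]  # placeholder

for setpoint_f in (70.0, 53.0):   # closed-loop vs. legacy supply temperature
    frac = economizer_fraction(sample_hours_f, setpoint_f)
    print(f"Supply setpoint {setpoint_f:.0f} F -> "
          f"free cooling available {frac:.0%} of the sampled hours")
```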
Provided the outdoor environment has favorable humidity conditions, significant energy savings can be attained by taking advantage of the hours during the year when both the supply and return temperatures are higher than the outside air temperature. In a legacy environment, the typical supply and return temperatures are 54°F and 70°F. In contrast, a closed-loop heat containment system could typically have supply and return temperatures between 68°F and 95°F. Depending on location, this can have a tremendous effect on the number of hours of mechanical cooling required in the data center. For example, a city such as Los Angeles can use economized cooling 86 percent of the hours in a year if the supply temperature is above 70°F. However, if the supply temperature is below 53°F, it can use full air-side economization only 6 percent of the hours in a year [5]. Thus, substantial savings can be achieved by increasing the supply and return temperatures, and a closed-loop heat containment solution can effectively deliver this result while ensuring the reliability of the IT equipment.

3. With the exception of enclosures that include internal heat exchangers.
4. A Strategic Approach to Datacenter Cooling, by Dr. James Fulton, Associate Professor of Mathematics, Suffolk County Community College, Selden, New York, April 2008.

About the Author

Brent Goren, PE, is a data center consultant with Wright Line, where he provides technical expertise to assist clients in designing scalable, reliable, and efficient data centers. Brent has in-depth knowledge of both power and cooling in the data center and, most recently, has taken a lead role in building a practice around computational fluid dynamics (CFD) modeling and airflow management. Brent has over 15 years' experience working within IT environments in various roles and capacities, with the three years prior to joining Wright Line dedicated to data center consolidation and relocation projects. Brent received his BA in Electrical Engineering from the University of Manitoba. Mr. Goren can be reached at brent.goran@wrightline.com.

About Wright Line

Wright Line provides a wide range of innovative data center solutions developed through direct collaboration with its customers. From server enclosures and power distribution units (PDUs) to its new patent-pending heat containment system, Wright Line can help you improve your data center infrastructure efficiency (DCiE) and power usage effectiveness (PUE). Wright Line doesn't advocate a one-size-fits-all product development methodology, but rather a consultative, collaborative approach that maximizes your data center operations. Its industry-leading enclosures, coupled with its broad range of accessories, power distribution, keyboard/video display/mouse (KVM) switches, and monitoring products, are designed to store, cool, power, manage, and secure your mission-critical equipment.

About the Uptime Institute

Uptime Institute is a leading global authority on data centers. Since 1993, it has provided education, consulting, knowledge networks, and expert advisory services for data center Facilities and IT organizations interested in maximizing site infrastructure uptime and availability. It has pioneered numerous industry innovations, including the Tier Classification System for data center availability, which serves as a de facto industry standard. The Site Uptime Network is a private knowledge network with 100 global corporate and government members, mostly Fortune 100-sized organizations in North America and EMEA. In 2008, the Institute launched an individual Institute membership program. For the industry as a whole, the Institute certifies data center Tier level and site resiliency, provides site sustainability assessments, and assists data center owners in planning and justifying data center projects. It publishes papers and reports, offers seminars, and produces the annual Green Enterprise IT Symposium, the premier event in the field focused primarily on improving enterprise IT and data center computing energy efficiency. It also sponsors the annual Green Enterprise IT Awards and the Global Green 100 programs. The Institute conducts custom surveys, research, and product certifications for industry manufacturers. All Institute-published materials are © 2009 Uptime Institute, Inc., and protected by international copyright law, all rights reserved, for all media and all uses.
Written permission is required to reproduce all or any portion of the Institute's literature for any purpose. To download the reprint permission request form, visit uptimeinstitute.org/resources.

Uptime Institute, Inc.
2904 Rodeo Park Drive East, Building 100
Santa Fe, NM 87505-6316
Corporate Offices: 505.986.3900
Fax: 505.982.8484
uptimeinstitute.org

© 2009 Uptime Institute, Inc. and Wright Line

5. Data from Best Practices for Datacom Facility Energy Efficiency, ASHRAE Series, ISBN 978-1-933742-27-4.