Virginia Tech: Background, Case Summary, Results



Background

When Virginia Tech initiated plans to build a supercomputer by clustering hundreds of desktop computers, it had ambitious goals for the performance of the new system and a very aggressive schedule. The final plan involved a cluster of 1,100 64-bit Power Mac G5 dual-processor computers. If successful, this cluster would significantly expand the university's capabilities, enabling it to simulate the behavior of natural or human-engineered systems rather than relying on observation or physical modeling. But the cluster created significant cooling challenges that could not be addressed through traditional approaches to cooling.

Virginia Polytechnic Institute and State University, popularly known as Virginia Tech, is a comprehensive university of national and international prominence. With about 25,000 full-time students, Virginia Tech is home to groundbreaking research and produces world-class scholarship in a challenging academic environment.

Case Summary

Location: Blacksburg, Virginia
Products/Services: Liebert XD cooling system; Liebert Foundation racks
Critical Need: Provide focused high-capacity cooling for the supercomputer cluster.

Results

- Eliminated the need to build a new data center for the supercomputer.
- Optimized energy efficiency and space utilization.
- Fast response from Liebert and other partners kept the project on its aggressive schedule and helped the cluster qualify as the world's third-fastest supercomputer.

The Situation

The supercomputer designed by Virginia Tech computer scientists used 1,100 Macintosh G5 computers clustered using InfiniBand technology from Mellanox Technologies for primary communications and Cisco switches for secondary communications. The cluster also used Déjà vu, a software program developed at Virginia Tech, to provide transparent fault tolerance, enabling large-scale supercomputers to mask hardware, operating system and software failures.

The project was not only significant to Virginia Tech. It created a new model for supercomputer development that could bring supercomputer power within reach of organizations that had not previously been able to afford it. But the challenge the scientists at Virginia Tech were unable to address was keeping the cluster cool. The cooling solution for such a high-density room required a level of innovation along the lines of the university's supercomputer itself. Inadequate cooling would result in processors and switches operating at dangerously high temperatures, reducing performance and life span.

Emerson Network Power was prepared to address this challenge through its Liebert family of cooling technologies. Liebert power and cooling specialists played a key role in helping the university understand the magnitude of the challenge, evaluate alternatives and configure the room for optimum cooling.

"Liebert really helped open our eyes to the challenge of heat removal on this project," says Kevin Shinpaugh, director of research and cluster computing at Virginia Tech. "When you are pioneering a new approach as we were, you are in uncharted waters. Liebert was able to bring a detailed understanding of the challenge as well as a very well-developed solution."
Working with preliminary project specifications, Liebert specialists analyzed equipment power consumption and projected heat loads. Then, using computer airflow modeling, they determined the optimum arrangement of racks and began to model different cooling-system configurations. Each configuration was based on the hot aisle/cold aisle approach to tile placement and rack arrangement. In this approach, cold aisles have perforated floor tiles; the hot aisles do not. Racks are arranged front-to-front so that air coming through the tiles is drawn in at the front of each rack and exhausted out the back into the hot aisle.

Once the different approaches had been reviewed, the specialists analyzed two configurations in depth:

1. Relying exclusively on traditional precision air conditioning units; specifically, the Liebert Deluxe System/3.
2. Creating a hybrid solution that combined traditional computer room air conditioning units with supplemental cooling provided by the Liebert XD system.

The Traditional Approach to Cooling

Based on the data available, the Liebert specialists projected a sensible heat load of 1,972,714 Btu/hr for the 3,000-square-foot facility. Assuming this heat load, the room would require a total of nine Liebert 30-ton Deluxe System/3 precision air conditioners: seven primary units and two backups. The seven Deluxe units would generate a total sensible capacity of 2,069,200 Btu/hr and 106,400 CFM of airflow. Based on these CFM requirements, the facility would need 236 perforated floor tiles, totaling 944 square feet (based on a normal per-tile airflow of 450 CFM). These tiles would cover nearly one third of the total floor space.

The engineers then evaluated the effect of raising the floor from 18 inches to 40 inches to equalize airflow in the room. This change improved the situation but still resulted in less than optimum cooling, and a 40-inch raised floor was impractical because of physical limitations of the building.

The Hybrid Solution

Precision air conditioning units such as the Liebert Deluxe System/3 have provided effective cooling in thousands of data centers around the world; however, high-density systems such as the Virginia Tech supercomputer are stretching these systems beyond their practical capacity. In response, Liebert developed an alternate approach that uses fewer room-level precision air conditioners, reducing pressures under the floor, and supplements these units with rack-based cooling systems. Airflows were then analyzed using the TileFlow modeling program, assuming an 18-inch raised floor.
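The tile count in the traditional configuration above follows directly from the airflow figures. A minimal sketch of the arithmetic, assuming the common 2 ft x 2 ft (4 sq ft) raised-floor tile size, which the study does not state explicitly:

```python
# Perforated-tile sizing for the traditional (all-CRAC) configuration.
# Airflow figures are from the case study; the 2 ft x 2 ft tile is an assumption.
total_cfm = 106_400          # combined airflow of seven 30-ton Deluxe System/3 units
cfm_per_tile = 450           # normal per-tile airflow cited in the study
tile_area_sqft = 4           # assumed 2 ft x 2 ft perforated tile
room_area_sqft = 3_000

tiles = round(total_cfm / cfm_per_tile)          # 236 tiles
tile_area = tiles * tile_area_sqft               # 944 sq ft
fraction_of_floor = tile_area / room_area_sqft   # ~0.31, nearly one third

print(tiles, tile_area, round(fraction_of_floor, 2))  # 236 944 0.31
```

Covering nearly a third of the floor with perforated tiles is what pushed the engineers to consider the deeper plenum, and ultimately the hybrid approach.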
With this configuration, the volume of air being pumped under the floor created extremely uneven airflows, ranging from -70 CFM (reverse flow down through a tile) to 1,520 CFM. In addition, under-floor pressures and velocities showed significant turbulence beneath the floor, reducing cooling system efficiency and creating the potential for hot spots that could damage the computers and reduce availability.

Figure: Airflow analysis across one row of racks using floor-mounted precision air conditioners with an 18-inch raised floor.

Liebert specialists analyzed airflow from two 20-ton Liebert Deluxe units and confirmed that the two units achieved a more uniform airflow; variance within the room was reduced from more than 400 CFM to less than 100 CFM. With uniform airflow established, the specialists calculated the optimum configuration of supplemental cooling systems using the Liebert XD system.

"Liebert was one of the few companies that had developed a viable cooling solution for high-density computer configurations," says Patricia Arvin, associate vice president, information systems and computing, Virginia Tech. "It was clear that traditional approaches to cooling would not be sufficient by themselves," adds Shinpaugh. "Fortunately, Liebert had a solution."

The Liebert XD is a flexible, scalable and waterless supplemental cooling solution that delivers sensible cooling of heat densities higher than 500 watts per square foot. The Liebert XD family uses an environmentally friendly coolant to achieve high efficiencies and waterless cooling. The coolant is pumped as a liquid, converts to a gas within the heat exchangers, and is then returned to the pumping station, where it is re-condensed to liquid. The system achieves high efficiencies by placing the cooling system close to the heat source and utilizing the heat absorption capacity of a fluid changing state.

The Liebert XD family includes vertical rack-mounted fan coils (Liebert XDV modules) and ceiling-mounted fan coils (Liebert XDO modules). Based on available ceiling height, Liebert recommended the Liebert XDV for both hybrid configurations presented to Virginia Tech.
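The 500 W/sq ft figure describes localized rack-level densities, not the room average. A quick sanity check, using the standard conversion of 1 Btu/hr to about 0.293 W, shows why the room average alone understates the problem:

```python
# Room-average heat density for the Virginia Tech facility.
# The load and floor area are from the case study; the conversion
# factor is the standard Btu/hr-to-watt relationship.
BTU_HR_TO_W = 0.29307
sensible_load_btu_hr = 1_972_714
room_area_sqft = 3_000

load_w = sensible_load_btu_hr * BTU_HR_TO_W
avg_w_per_sqft = load_w / room_area_sqft
print(round(avg_w_per_sqft, 1))  # ~192.7 W/sq ft room average
```

The room averages under 200 W/sq ft, but the heat is concentrated in the rack rows rather than spread evenly, which is why rack-proximate cooling rated for densities above 500 W/sq ft was needed.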
One Liebert XDV cooling module was recommended for every other rack. This configuration provides ample cooling capacity to meet initial requirements, with the ability to expand capacity should heat densities increase as a result of changes to, or expansion of, the supercomputer. The Liebert XD system can utilize its own chillers or connect to the building chilled water system. Both options were analyzed for Virginia Tech. Liebert and Virginia Tech selected reciprocating air-cooled chillers, which supply chilled water to the Liebert XDP Heat Exchanger/Pumping module.
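The rough split of load between the room-level units and the supplemental modules can be sketched from the figures in this study. This assumes the nominal 12,000 Btu/hr per ton of cooling, treats the two 20-ton units' full nominal capacity as sensible, and divides the remainder evenly across the 48 XDV modules in the final solution; the actual unit ratings are not stated in the case study.

```python
# Rough load split between room-level CRACs and the 48 Liebert XDV modules.
# 12,000 Btu/hr per nominal ton and an even per-module split are assumptions;
# the study does not give the actual unit ratings.
BTU_HR_PER_TON = 12_000
sensible_load = 1_972_714                  # Btu/hr, from the analysis above
crac_capacity = 2 * 20 * BTU_HR_PER_TON    # two 20-ton Deluxe units: 480,000 Btu/hr
supplemental = sensible_load - crac_capacity
per_module = supplemental / 48             # ~31,100 Btu/hr (~9 kW) per XDV module
print(round(per_module))
```

Under these assumptions, each XDV module carries on the order of 9 kW, with the every-other-rack placement leaving headroom to add modules if densities grow.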

The Solution

Based on the detailed analysis conducted by the Liebert engineers, it was clear that the combination of two Liebert 20-ton Deluxe System/3 room air conditioners and 48 Liebert XDV supplemental cooling modules delivered the optimum cooling solution for the supercomputer facility. To further ensure proper performance of the cooling system, Liebert custom-built the racks for the Power Macs. In addition, Liebert provided the three-phase UPS system that supplies backup power and power conditioning for the supercomputer cluster.

The Results

Less than three months after receiving the preliminary specifications for the room, Liebert had analyzed multiple cooling system configurations, developed the optimum room layout and cooling solution, custom-designed racks to equipment requirements and provided the power and cooling equipment for the new supercomputer. The results have been impressive: in its first year of operation, the Virginia Tech supercomputer achieved speeds of 10.28 teraflops, making it the third-fastest supercomputer in the world.

"Liebert has been an extraordinary partner in this endeavor," says Arvin. "They worked with us to find a solution to every problem we've encountered, and they have done it in record time. It's rare to find a company as large as Liebert that can act so quickly and be so agile in responding to a customer's needs. It's like having a Ferrari custom designed and built with the efficiency of a Toyota. We are very impressed."

But then, Virginia Tech has done no less itself, building one of the most advanced supercomputers in the world from basic desktop components. According to The New York Times, the computer was put together "in a virtual flash."
EmersonNetworkPower.com

While every precaution has been taken to ensure accuracy and completeness in this literature, Liebert Corporation assumes no responsibility, and disclaims all liability for damages resulting from use of this information or for any errors or omissions. Specifications subject to change without notice. © 2006 Liebert Corporation. All rights reserved throughout the world. Trademarks or registered trademarks are property of their respective owners. Liebert and the Liebert logo are registered trademarks of the Liebert Corporation. Business-Critical Continuity, Emerson Network Power and the Emerson Network Power logo are trademarks and service marks of Emerson Electric Co. © 2006 Emerson Electric Co.