Retrofitting with passive water cooling at the rack level.

724-746-5500 | 0118 965 6000 | blackbox.com | www.blackbox.co.uk
Table of Contents

- Introduction
- Build or Renovate
- Add Heat Density As You Grow
- Heat Transfer Door Cooling
- Cost Analyses
- Retrofit Example
- University Case Study
- Summary
- About Black Box

We're here to help! If you have any questions about your application, our products, or this white paper, contact Black Box Tech Support on 0118 965 6000 or go to www.blackbox.co.uk and click on Talk to Black Box. You'll be live with one of our technical experts in less than 30 seconds.
Introduction

The proliferation of new application development in software as a service, together with new virtualisation technologies, has created a conflict in many data centres between today's outdated capabilities and tomorrow's growing needs. The cost of building and managing data centre resources continues to rise every year, and the annual cost of powering and cooling data centres makes up the majority of operating expenses. Legacy data centres waste at least 50% of the energy they consume managing the heat generated by IT systems.

Most data centres are not new; they are housed in buildings whose practices may be 20 years old and have not caught up with current IT rack power densities. Fully populated racks can dissipate 7 to 25 kW per rack, and high-end servers can dissipate more than 40 kW per rack. This level of density requires power and cooling densities that exceed typical current capabilities. Furthermore, most legacy data centres were not designed to use their maximum capabilities, best practices have not been implemented, and cooling has been matched to IT equipment requirements far less than optimally. The result, as identified by the Uptime Institute, is a common situation in which data centres consume 2.0 to 2.6 times the cooling required by the IT equipment, wasting energy and power and further reducing the amount of IT that can be housed in the structure.

By implementing best practices and optimising the performance of the existing air-cooling infrastructure, data centre operators can improve the specified cooling infrastructure to about 70% efficiency. The question owners must ask is whether air cooling at 70% efficiency is acceptable, and whether that performance can be sustained as computing technologies push power and cooling beyond current requirements.
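The scale of that waste is easy to quantify. A minimal sketch, where the 2.0 and 2.6 multipliers come from the Uptime Institute figure above and the 500 kW IT load is purely an illustrative assumption:

```python
def excess_cooling_kw(it_load_kw, cooling_multiplier):
    """Cooling delivered beyond what the IT load actually requires.

    cooling_multiplier: total cooling supplied as a multiple of the
    cooling the IT equipment needs (Uptime Institute: 2.0 to 2.6).
    """
    return it_load_kw * (cooling_multiplier - 1.0)

# For an illustrative 500 kW IT load, roughly 500 to 800 kW of the
# cooling delivered does no useful work.
low, high = excess_cooling_kw(500, 2.0), excess_cooling_kw(500, 2.6)
print(round(low), round(high))  # 500 800
```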
What can operators expect from their environment when cooling requirements climb from 12 kW to as much as 25 kW per rack?
Build or Renovate

With the adoption of virtualisation, cloud computing, and consolidation, most data centres that have reached the upper limit of their energy capacity face two choices: build new or renovate. New data centres built with performance-enhancing cooling, whether fully optimised air, liquid cooling, or hybrid systems, deliver improved efficiency, but the cost of building a new data centre is prohibitive: according to the Uptime Institute, the total capital expense ranges from £11,000 to £16,000 or more per kilowatt. The second alternative is renovation. While renovation raises the challenges of improving existing space and operating a data centre through a rebuilding project, retrofitting achieves the same goals at a fraction of the cost.

Add Heat Density As You Grow

This paper proposes that by retrofitting existing data centres with passive water cooling at the rack level, operators can increase IT rack power densities at a fraction of the cost of building a new data centre. Retrofitting also allows a deliberate approach to improvement, adding resources on an as-needed basis rather than building out an entire space at once. Passive water cooling at the rack level also enables upgrades with no disruption to IT operations while delivering significant savings in energy consumption, quite apart from avoiding the capital expense of new construction.

Heat Transfer Door (HTD) Cooling

Passive heat transfer doors neutralise heat directly at the source: the rack. These modules replace the standard rear doors on IT equipment racks. Server fans draw air through the chassis and then through the door's liquid-filled fin-and-tube coil assembly, which removes the heat before the air re-enters the data centre. An HTD uses specially designed coils that maintain airflow through the rack with negligible resistance.
An HTD can sensibly cool up to 35 kW of heat per rack. Occupying minimal floor space, the HTD is a flexible, efficient, and space-saving cooling solution for data centres. In a retrofit, an HTD system may be fed by a redundant pumping system, a Coolant Management System (CMS), which creates a secondary loop and controls the heat-exchanger temperature to avoid any risk of water condensation. Because the doors are passive, the energy such a system consumes is negligible, a significant saving compared with fan-driven air cooling. In some cases an existing facility can be retrofitted without a CMS, with the new cooling system connected directly to a chiller.

Figure 1 shows a typical system layout. This system can remove as much as 35 kW at the recommended new American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) conditions. It is designed to add minimal pressure drop compared with the existing perforated door, resulting in no meaningful temperature rise inside the rack: IT equipment tests by IBM and Dell have shown less than a 1.8 °F (1 °C) increase in CPU junction temperature, which is negligible.

Figure 1
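The door's 35-kW rating implies only a modest chilled-water flow. A back-of-envelope sketch using Q = m·cp·ΔT, where the 10 °C loop temperature rise is an assumed figure for illustration, not a vendor specification:

```python
WATER_CP = 4.186   # specific heat of water, kJ/(kg*K)
WATER_RHO = 1.0    # density of water, kg/L (approximate)

def flow_lpm_for_load(load_kw, delta_t_c):
    """Water flow (L/min) needed to carry away `load_kw` of heat
    with a coil temperature rise of `delta_t_c` (Q = m * cp * dT)."""
    kg_per_s = load_kw / (WATER_CP * delta_t_c)
    return kg_per_s / WATER_RHO * 60.0

# Removing the full 35 kW with a 10 C rise takes roughly 50 L/min.
print(round(flow_lpm_for_load(35, 10), 1))  # 50.2
```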
Figure 2 is a photograph of a passive HTD. HTD units work with a number of currently available enclosures using transition frames, and each HTD can be fitted to an enclosure within minutes without disrupting IT operations.

Figure 2

Cost Analyses

Figure 3 compares the cost of retrofitting a data centre with the cost of building a new one. This analysis is based on data derived from customers and the Uptime Institute. DataSite Orlando is a managed-services data centre housed in a former telco site. The facility was designed to optimise power and cooling efficiency, using best practices to improve its performance in its conventionally cooled areas. In its higher-density areas DataSite Orlando uses passive liquid cooling, which, in addition to enabling higher-density loads, freed up 75 percent of the centre as white space. DataSite can now cool up to 200 watts per square foot with standard air cooling and up to 600 watts per square foot using passive liquid cooling. DataSite Orlando is the flagship property in BURGES property + company's high-tech real-estate portfolio.

According to Jeff Burges, president and founder of BURGES property + company, increasing heat density in an existing structure begins with upgrading and optimising the existing structure using best practices. In this example, a 10,000-square-foot, 350-kW data centre wants to increase capacity to 1.5 megawatts. Two options exist: upgrade the existing facility, increasing density from 35 watts/sq. ft. to 150 watts/sq. ft., or build a new facility. In the upgrade scenario, the cost to upgrade the power and cooling infrastructure supporting the existing 350-kW load is estimated at £700 per kilowatt, and the power and cooling infrastructure for the added load of 1,150 kW is estimated at £7,000 per kilowatt (assuming no new space-construction cost). The combined cost totals over £8 million.
In comparison, building a new data centre with an IT load of 1.5 megawatts at £8,000 per kilowatt would total nearly £12 million, over 30% more than the cost of retrofitting. In addition, time, planning, and market risk all increase significantly under a new-build scenario.

Retrofit Example

Retrofitting and densifying existing facilities is the most economical route: it is cheaper and faster, and it allows state-of-the-art cooling to be deployed at lower cost. Figure 3 illustrates the scenario: the current facility runs at 35 W/sq. ft. across 10,000 sq. ft. (350 kW); the desired state delivers 125 W/sq. ft., shown both as the existing 350 kW concentrated into 2,800 sq. ft. and as a further 900 kW across the remaining 7,200 sq. ft.

Figure 3
Figures 4 and 5 illustrate the comparison between the options. An added benefit of passive, localised liquid cooling is a redistribution of space, condensing the IT space requirement into 25 percent of the existing footprint. The comparisons are based on very conservative estimates.

Figure 4: Retrofit vs. new build, 1.5-MW example

Retrofit costs                      Upgrade existing power   New power infrastructure   Total
kW addressed                        350                      1,150                      1,500
Cost per kW (£)                     700                      7,000
Total cost (£)                      245,000                  8,050,000                  8,295,000

New build costs
Total load (kW)                     1,500
New power + cooling cost (£/kW)     8,000
Total P&C cost for new build* (£)   12,000,000

Retrofit saves approximately 30% over a new build.

Key assumptions (£ per kW unless noted)
New data centre construction, power + cooling (Uptime Institute): 8,000
Infrastructure portion of total cost: 10% (800)
Power + cooling: 7,000
Existing power reconfiguration cost: 315
Cooling retrofit cost: 375 (690 combined)

Spec for new scenarios          Current   Target   Increase
Load/sq. ft. (W)                100       500      400
DC critical area (sq. ft.)      10,000    10,000
Total load (kW)                 1,000     5,000    4,000
Implied kW per rack             3.0       15.0     12

In larger data centres, this comparison is equally compelling, as shown in Figure 5:

Retrofit costs                      Upgrade existing power   New power infrastructure   Total
kW addressed                        1,000                    4,000                      5,000
Cost per kW (£)                     700                      7,000
Total cost (£)                      700,000                  28,000,000                 28,700,000

New build costs
Total load (kW)                     5,000
New power + cooling cost (£/kW)     8,000
Total P&C cost for new build* (£)   40,000,000

Retrofit saves approximately 28% over a new build.

*Excludes cost/kW for non-P&C infrastructure (e.g., land and building),
estimated at £190/sq. ft. The key assumptions for Figure 5 are the same as those listed under Figure 4.

Figure 5
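The arithmetic behind Figures 4 and 5 can be reproduced directly from the stated per-kilowatt rates (£700 per kW to upgrade existing capacity, £7,000 per kW for new retrofit infrastructure, £8,000 per kW for a new build). A minimal sketch of that model:

```python
def retrofit_cost(existing_kw, added_kw,
                  upgrade_rate=700, new_infra_rate=7000):
    """Retrofit = upgrade existing power (per kW) plus new
    power/cooling infrastructure for the added load (per kW)."""
    return existing_kw * upgrade_rate + added_kw * new_infra_rate

def new_build_cost(total_kw, build_rate=8000):
    """New build priced per kW of power + cooling capacity."""
    return total_kw * build_rate

# Figure 4 scenario: 350 kW existing, 1,150 kW added (1.5 MW total).
print(retrofit_cost(350, 1150))   # 8295000
print(new_build_cost(1500))       # 12000000

# Figure 5 scenario: 1,000 kW existing, 4,000 kW added (5 MW total).
print(retrofit_cost(1000, 4000))  # 28700000
print(new_build_cost(5000))       # 40000000
```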
University Case Study

A large midwestern university's advanced computing centre provides high-performance computing and storage for cutting-edge science, engineering, and social-science research across the school. It needed additional cooling capacity and power distribution to support a new 1,200-node HPC cluster. Constraints on data centre power, space, and chilled-water capacity ruled out traditional precision air conditioning. The university instead selected passive liquid cooling using heat transfer door technology as the basis for the cluster-expansion cooling solution. The design consisted of 50 rear-door heat exchangers, which act as passive radiators to cool the exhaust air from each enclosure. The units were connected by hoses to five 150-kW Coolant Management Systems (CMS), which transferred the heat to the building's primary chilled-water loop.

The data centre faced a number of challenges when evaluating cooling methods. The facility is housed below ground, with limited overhead space and shared chiller resources. It has only shallow raised floors, so air is delivered from the ceiling, and that cooling was not uniformly distributed. Before increasing server density, the data centre consumed 500 kW, not counting the power to run the 12 traditional 20- and 30-ton computer-room air conditioners needed to cool the space. Once densified, the data centre was able to increase the compute power to 888 kW and eliminate three of the computer-room air conditioners from the area where the rear-door heat exchangers were installed. Retrofitting the existing data centre dropped the capital expense from a potential £40 million for a new data centre to about £625,000 for the retrofit. And because the heat transfer doors operate above the dew point, the selection also eliminated the need for additional pumps and systems to remove condensation.
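Because the doors run above the dew point, no condensate handling is needed. Whether a given loop temperature clears the dew point can be checked with the Magnus approximation; the room condition below (24 °C at 50% RH) is an illustrative assumption, not a figure from the case study:

```python
import math

def dew_point_c(temp_c, rel_humidity):
    """Dew point in Celsius via the Magnus approximation.
    rel_humidity is a fraction, e.g. 0.5 for 50% RH."""
    a, b = 17.62, 243.12  # Magnus coefficients for water vapour
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity)
    return (b * gamma) / (a - gamma)

# A 24 C room at 50% RH has a dew point near 13 C, so a secondary
# loop held above that temperature stays condensation-free.
print(round(dew_point_c(24, 0.5), 1))  # 12.9
```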
Each CMS expends little energy, consuming no more than 2.5 kW. From a cost perspective, given an electric rate of 0.003 per kWh to run the CMS, the data centre saves more than £80,000 in operating expenses over a five-year period; with utility prices increasing, the savings could top £125,000 over ten years. These examples also take no account of the expense associated with unused cooling capacity or power. Eliminating the unused, excess cooling and power typical of many older data centres makes the financial benefits of a retrofit even more significant.

The following charts illustrate a generic example comparing HTD cooling with both non-passive in-row cooling and traditional computer-room air conditioners (CRACs). In many cases the initial capital cost of the HTD solution is comparable to the alternatives, but given the HTD's significantly lower power consumption and ongoing maintenance, its operating costs are significantly lower. When reviewing the total cost of ownership, therefore, the HTD is a very compelling solution. Figures 7 and 8 show a typical example comparing the solutions and the total cost of ownership (TCO), which equals day-1 capital plus cumulative operating expense (OPEX) over the time horizon shown.

In this example, the assumed load per rack is 12 kW. The capital expense (CAPEX) is likely understated in the CRAC scenario, because containment in a 12-kW-per-rack environment would cost more and a taller raised floor would probably be necessary (most traditional raised floors cannot accommodate the airflow needed to cool 12 kW per rack). In addition, the energy-cost assumption is today's UK average of roughly £0.06 per kWh; most urban areas are more expensive, and the model does not reflect future energy-cost increases. All of this would make CRACs and the higher-power-consuming in-rows even less attractive than shown below.
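The TCO definition above is simple enough to model. In this sketch the CAPEX and annual-OPEX figures are hypothetical placeholders, not the values behind Figures 7 and 8; the point is the crossover behaviour of a higher-CAPEX, lower-OPEX solution:

```python
def tco(capex, annual_opex, years):
    """Total cost of ownership: day-1 capital plus cumulative
    operating expense over the given horizon."""
    return capex + annual_opex * years

# Hypothetical figures: the HTD-like option costs more on day 1 but
# far less to run, so it overtakes the CRAC-like option quickly.
htd = [tco(300_000, 20_000, y) for y in range(5)]
crac = [tco(250_000, 60_000, y) for y in range(5)]
crossover = next(y for y in range(5) if htd[y] < crac[y])
print(crossover)  # 2: cheaper in total by year 2
```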
Figure 6: Assumptions/output of the generic example

Key assumptions                                RDHx    CRAC    In-row (no HACS)
Number of racks                                28      28      30
Cooling units (RDHx/CRACs/in-rows)             28      8       38
CDUs/manifolds needed                          5       n/a     5
Cooling equipment footprint (sq. ft.)          75      320     518
Cooling equipment power consumption (kW)       13      64      42
Total square feet needed if new                775     840     1,268

Power consumption
Power draw per RDHx/CRAC/in-row (kW per unit)  n/a     8.0     1.1
Power draw per CDU/manifold (kW per unit)      2.5
Total power draw, all units (kW)               13      64      42
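The totals in Figure 6 reconcile with the per-unit draws once you note that the doors themselves consume no power; treating the RDHx doors and the manifolds as drawing 0 kW is an assumption implied by, not stated in, the figure:

```python
def total_draw_kw(units, unit_kw, cdus=0, cdu_kw=2.5):
    """Total cooling power draw: cooling units plus any CDUs."""
    return units * unit_kw + cdus * cdu_kw

rdhx = total_draw_kw(28, 0.0, cdus=5)  # passive doors + 5 CDUs = 12.5 kW (~13)
crac = total_draw_kw(8, 8.0)           # 8 CRACs at 8.0 kW each = 64 kW
in_row = total_draw_kw(38, 1.1)        # 38 in-rows at 1.1 kW each = ~42 kW
print(rdhx, crac, in_row)
```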
Figure 7: Capital expenses and annual operating expenses for the HTD, CRAC/CRAH, and in-row (no HACS) solutions.

Figure 8: Total cost of ownership (initial cost plus cumulative OPEX, years 1 to 4) for the HTD, CRAC/CRAH, and in-row (no HACS) solutions.
Summary

Passive liquid cooling opens options in retrofitting: optimised performance, an initial cost comparable to traditional cooling options, and the opportunity to increase cooling capacity without disrupting operations. Over the long term, liquid cooling is significantly more cost-effective for data centre operators. The question of passive liquid cooling isn't if, but when.