I-STUTE Project - WP2.3 Data Centre Cooling, Project Review Meeting 4, Lancaster University, 2nd July 2014
Background
- Data centres are estimated to use 2-3% of total electricity consumption in the UK and to generate 3.3 million tonnes of CO2 annually
- Data centre energy use and emissions are projected to quadruple by 2020 without significant efficiency improvements
- Typically, approx. 50% of data centre energy is used for cooling and humidification
- Extreme energy saving measures have resulted in as little as 7% of total energy being used for purposes other than IT. These measures are not feasible in all cases; however, there is potential for large energy savings
- Use of energy for the IT load is also generally inefficient, due to: (i) resilience measures; (ii) operating at low IT loads. There is therefore potential for large energy savings in IT power usage
Plan for data centre project
Phase 1 (July 2013 - Oct 2014) - Development of roadmap. Tasks:
(1) Study of the data centre industry: gathering information, networking/identification of key players, and reviewing current areas of research interest and the current state of the industry
(2) Identify new energy/carbon saving cooling technologies and strategies for use in data centres, and establish methods for evaluating and scoring new technologies
(3) Review and evaluate each technology/strategy against defined criteria
(4) Identify technologies for detailed study/development in the second phase of the project
(5) Produce roadmap document
Phase 2 (Oct 2014 - July 2016) - Detailed study of selected technologies
Overview of activities undertaken to date
- Gathered information on data centres, e.g. types, sizes, layouts, operation, current technologies and potential future technologies
- Networking: attending data centre industry events, identifying key players, establishing contacts with data centre operators, manufacturers/suppliers and other research teams
- Used data centre modelling software (Romonet) to predict energy use, emissions and costs for a conventional data centre, to develop a baseline case
- Defined criteria for evaluating current and future data centre cooling technologies
- Used exergy analysis to rate the energy performance of different data centre cooling approaches and the potential for energy recovery
Data centre cooling approaches
Air based (conventional)
- Advantages: effective; uses fans, air conditioners and chillers. New options: free cooling, evaporative cooling, higher operating temperatures
- Disadvantages: low heat carrying capacity, large air volumes, costly equipment, inefficient
Water based
- Advantages: high heat capacity; pumped in small volumes; efficient; low energy input
- Disadvantages: incompatible with electronics; only recently used in data centres
Refrigerant based
- Advantages: compatible with electronics; high heat carrying capacity, particularly 2-phase; pumped systems need low energy input
- Disadvantages: little experience of use in data centres
Data centre cooling technologies
Air: (i) traditional use of CRACs, CRAHs and chillers around the perimeter of the room, with random layout of racks. Improved-efficiency air-cooled systems: (ii) raised floor + hot/cold aisle; (iii) in-row cooling; (iv) contained hot or cold aisle; (v) air-side economiser; (vi) direct air free cooling; (vii) adiabatic free cooling; (viii) direct evaporative; (ix) indirect evaporative; (x) water-side economiser
Water: (i) direct on-chip water cooling; (ii) conduction cold plate cooling of server; (iii) rear door water-cooled rack system
Refrigerant: (i) immersion cooling of server boards; (ii) spray cooling of chips; (iii) direct on-chip 2-phase pumped system; (iv) direct on-chip 2-phase vapour compression (VC) system
Future/blue sky: (i) thermoelectric; (ii) thermionic tunnelling; (iii) thermoacoustic; (iv) Stirling coolers; (v) air cycle; (vi) liquid air engine; (vii) ionic wind; (viii) porous media
Typical air-cooled data centre configuration
[Figure: energy flows in data centres - electrical and mechanical energy inputs, heat outflows and typical temperatures]
- The main aim of conventional data centre cooling is to remove heat from the vicinity of the microprocessors and reject it to the outside ambient air
Sources of heat in server racks in data centres
- Heat generated in data centres comes not just from the microprocessors
[Figure: typical power consumption/heat generation pattern for a data centre server rack. Source: Intel]
Potential for heat energy recovery from server components

Parameter | Processors | Memory | PCI | Drives | Motherboard | PSU | Fans | DC loss | Standby
% Power consumption | 30% | 11% | 3% | 6% | 3% | 25% | 9% | 10% | 2%
Operating temperature | 70 °C | 70 °C | 30 °C | 45 °C | 40 °C | 50 °C | 30 °C | 40 °C | -
Carnot efficiency (Tc = 20 °C) | 0.15 | 0.15 | 0.03 | 0.08 | 0.06 | 0.09 | 0.03 | 0.06 | -
Recoverable energy (100 kW input) | 4.5 kW | 1.65 kW | 0.09 kW | 0.48 kW | 0.18 kW | 2.25 kW | 0.27 kW | 0.60 kW | -

- Main components from which heat could be recovered are: (i) processors; (ii) PSU; (iii) memory
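The "recoverable energy" row is each component's share of the 100 kW input multiplied by its Carnot efficiency, 1 - Tc/Th (temperatures in kelvin). A minimal Python sketch reproducing the table (shares and temperatures taken from the table above; the slide's kW figures correspond to the rounded Carnot values):

```python
# Carnot-limited heat recovery per server component (values from the
# table above; 100 kW total input, cold side at Tc = 20 C).
T_COLD_K = 293.15  # 20 C cold-side temperature, in kelvin

# component: (share of total power consumption, operating temperature in C)
components = {
    "Processors":  (0.30, 70),
    "Memory":      (0.11, 70),
    "PCI":         (0.03, 30),
    "Drives":      (0.06, 45),
    "Motherboard": (0.03, 40),
    "PSU":         (0.25, 50),
    "Fans":        (0.09, 30),
    "DC loss":     (0.10, 40),
}

total_input_kw = 100.0
for name, (share, temp_c) in components.items():
    t_hot_k = temp_c + 273.15
    carnot = 1.0 - T_COLD_K / t_hot_k          # maximum work fraction
    recoverable_kw = share * total_input_kw * carnot
    print(f"{name:12s} Carnot = {carnot:.2f}  recoverable = {recoverable_kw:.2f} kW")
```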
Quality of waste heat from data centres
- Data centres generate very large amounts of heat energy. This heat is generally transferred to the surrounding environment and wasted
- This waste heat should be regarded as an energy source and exploited
- To determine the potential for re-use of waste heat dissipated from a data centre, an exergy analysis is needed
- Different cooling methods/technologies produce heat output streams at different temperatures, with different exergies, i.e. qualities
- It is planned to categorise a range of data centre cooling technologies in terms of both energy saving potential and exergy maximisation of waste heat streams
Exergy and degradation of energy
- Exergy measures the quality of a given energy source. It is defined as the maximum potential of that energy source for doing work
- Electricity has an exergy value close to 100%. However, heat generally has a lower exergy value that is related to its temperature
- Each process taking place in the data centre, e.g. conversion of electricity to heat, results in a loss of exergy
- The change in exergy for a closed system is given by:

\[ \Delta X = (U_2 - U_1) + P_0 (V_2 - V_1) - T_0 (S_2 - S_1) + \tfrac{1}{2} m \left( v_2^2 - v_1^2 \right) + m g (z_2 - z_1) \]

where the terms on the right-hand side are, respectively, the change in internal energy, the work done against the surroundings, the exergy destroyed, the change in kinetic energy, and the change in potential energy; P_0 and T_0 are the pressure and temperature of the surroundings
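For heat Q delivered at a constant temperature T with surroundings at T_0, the general expression above reduces to the familiar Carnot factor; this special case underlies the exergy figures in the analysis on the next slide:

\[ X_{\mathrm{heat}} = Q \left( 1 - \frac{T_0}{T} \right) \]

For example, 1 kW of heat at a chip temperature of 60 °C (333 K), with T_0 = 303 K, carries 1 - 303/333 ≈ 0.090 kW of exergy, which is the 9.0% "exergy remaining" figure in the table that follows.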
Results of preliminary exergy analysis
Ambient temperature of 30 °C (303 K) assumed:

Cooling medium | Cooling method | Chip temperature | Exergy remaining after electrical energy converted to heat | Heat energy transfer from chip to coolant (1st law efficiency) | Exergy recovered in coolant (2nd law efficiency) | Net exergy destroyed per kW IT | Exergy recovered per kW cooling
Air | Fan | 60 °C | 9.0% | 93% | 1.1% | 1.01 | 0.32
Air | Fan + CRAC | 85 °C | 15.3% | 93% | 21.0% | 0.90 | 0.60
Water | Pump | 60 °C | 9.0% | 93% | 5.0% | 0.93 | 1.47
Water | Pump | 75 °C | 12.9% | 83% | 7.7% | - | -
Water | Pump | 85 °C | 15.3% | 69% | 8.4% | 0.86 | 2.64
Refrigerant | Pump | 85 °C | 15.3% | 99.7% | 8.7% | 0.83 | 12.1
Refrigerant | VC | 85 °C | 15.3% | 92.5% | 14.8% | 0.86 | 0.79
Aims of heat recovery from data centres
- Need to maximise the temperature of the waste heat stream to enable the greatest range of applications
- Could use a heat pump to boost the waste heat temperature, but is this energy efficient? (a rough estimate is sketched below)
- Ideally, want to recover energy in excess of that used by the cooling method
- Probably best to use the waste heat directly
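One way to frame the heat pump question is through the Carnot COP limit. In the sketch below, the source and sink temperatures and the 50%-of-Carnot performance factor are illustrative assumptions, not project figures:

```python
# Rough estimate of the electricity cost of boosting data centre waste
# heat with a heat pump (illustrative temperatures; 50% of the Carnot
# COP is an assumed practical performance, not a project figure).
T_SOURCE_C = 60.0   # waste heat stream temperature
T_SINK_C = 80.0     # target delivery temperature (e.g. heating circuit)

t_source_k = T_SOURCE_C + 273.15
t_sink_k = T_SINK_C + 273.15

cop_carnot = t_sink_k / (t_sink_k - t_source_k)  # ideal heating COP
cop_real = 0.5 * cop_carnot                      # assumed practical COP

# Electricity needed per kW of heat delivered at the higher temperature
elec_per_kw_heat = 1.0 / cop_real
print(f"Carnot COP: {cop_carnot:.1f}, assumed real COP: {cop_real:.1f}")
print(f"Electricity input per kW heat delivered: {elec_per_kw_heat:.2f} kW")
```

On these assumptions a modest 20 K lift costs roughly 0.11 kW of electricity per kW of heat delivered; the penalty grows rapidly with the size of the temperature lift, since the Carnot COP falls as the lift increases.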
Data centre waste energy recovery technologies
- Organic Rankine Cycle [diagram: waste heat in → vapour generator → turbine (shaft work) → condenser (heat out) → pump]
- Waste heat driven absorption chiller
- Kyoto Wheel
- Other waste heat uses include: domestic and industrial space and water heating, district heating, desalination, biomass processing, piezoelectrics and thermoelectrics
Options for efficient heat recovery from server components
- Key to the viability of waste heat recovery from data centres is to maximise the coolant temperature. This is likely to be best achieved by liquid cooling (minimising the temperature difference ΔT between chip and coolant); a rough sensitivity sketch follows below
- Use of a porous media evaporator: refrigerant could either be pumped or used with a vapour compression system
- Use of a microchannel evaporator: again, refrigerant may be either pumped or used in a vapour compression system
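To illustrate why coolant temperature is key, the sketch below estimates the shaft work an Organic Rankine Cycle could recover from 100 kW of waste heat at different coolant temperatures. The 50%-of-Carnot cycle efficiency and the temperature values are illustrative assumptions, not project results:

```python
# Sensitivity of recoverable shaft work to coolant temperature, for an
# Organic Rankine Cycle recovering 100 kW of data centre waste heat.
# The 50%-of-Carnot cycle efficiency is an illustrative assumption.
T_AMBIENT_K = 303.15          # 30 C heat rejection temperature
ORC_FRACTION_OF_CARNOT = 0.5  # assumed practical ORC performance
WASTE_HEAT_KW = 100.0

for coolant_c in (45, 60, 75, 85):
    t_hot_k = coolant_c + 273.15
    eta_carnot = 1.0 - T_AMBIENT_K / t_hot_k   # Carnot efficiency limit
    work_kw = WASTE_HEAT_KW * ORC_FRACTION_OF_CARNOT * eta_carnot
    print(f"coolant {coolant_c} C: Carnot limit {eta_carnot:.1%}, "
          f"ORC output ~{work_kw:.1f} kW")
```

Under these assumptions the recoverable work roughly triples between 45 °C and 85 °C coolant, which is consistent with the "exergy remaining" figures on slide 12 and supports the case for liquid cooling with a small ΔT.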
Next steps
- Finalise evaluation method for assessing new technologies for roadmap
- Evaluate and score energy/carbon saving technologies against defined criteria
- Produce roadmap document, including:
  - review of the data centre industry
  - detailed description of each of the technologies evaluated
  - potential for waste heat and energy recovery
- Identify technologies for detailed study/development in the second phase of the project
Timescales for data centre project

Activity | Duration | Milestones
Development of roadmap | July 2013 - Oct 2014 | Finalise evaluation method - Jun 2014; Final report/roadmap - Oct 2014
Detailed study of selected technologies | Oct 2014 - July 2016 | Interim report - May 2015; Interim report - Nov 2015; Final report - July 2016; Recommendations - July 2016
Options for dissemination of results of project
- Present results at data centre industry conferences, e.g. Uptime Institute conference in the USA; Data Centre Dynamics conference in the UK, Europe or USA
- Present results at other relevant industry forums, e.g. IMechE, CIBSE, IOR, SIRACH
- Journal papers, e.g. ASHRAE or CIBSE journals