Energy Efficiency in New and Existing Data Centers: Where the Opportunities May Lie
Mukesh Khattar, Energy Director, Oracle
National Energy Efficiency Technology Summit, Sep. 26, 2012, Portland, OR
Agenda
- Growing energy use in data centers
- Oracle's drive in energy efficiency
- Drivers for efficiency in data centers
- Efficiency opportunities in the cooling chain in data centers
Growing Energy Concerns
- Estimates may vary, but all point to ever-increasing energy use
- Recent NY Times article "Power, Pollution and the Internet" emphasized growing power use and inefficiencies in data centers
[Chart: Annual US Data Center Energy Usage, billions of kilowatt-hours: 60B (2006), 80B (2008, est.), 110B (2010, est.). Source: EPA]
Growing Energy Use in Data Centers
Oracle U.S. owned data centers and buildings, including acquisitions:
- 2003: Data Centers 44%, Buildings 28%, Computer Labs 28%
- 2008: Data Centers 66%, Buildings 17%, Computer Labs 17%
*Computer Labs energy use is estimated
Increased use draws potential regulations (carrot & stick)
EPA Energy Star:
- Monitors
- Power supplies (85+)
- UPS
- Servers
- Storage
- Entire data centers (pilot underway)
Industry response:
- Climate Savers Computing Initiative
- The Green Grid
- ASHRAE 90.1
- EIA/ITI/SNIA
Oracle: Industry Leader in Energy Efficiency Drives
- First software company to participate in the U.S. EPA Climate Leaders Program
- Member of the U.S. EPA Green Power Partnership Program
- Green Grid founding member (Sun)
- Member of the Climate Savers Initiative
- Participates in demand response programs
- Carbon Disclosure Project
[Photo: Oracle HQ campus, CA]
Energy efficiency generates real $avings
- Buildings: >$2M savings/year
- Data centers: >$3M savings/year
- Oracle corporate emphasis on green
- Energy efficiency cheaper than green or solar power
Data Center Airflow Management
Applied at Austin Data Center during expansion in 2004:
- Rack hot-air enclosure
- Eliminated excess air flow
- Permitted use of supply fan VFDs
- Supply air temperature raised to 68 F
- Return air temperatures are in the 80s F
[Diagram: rack hot-air enclosure, with air temperatures labeled 60 F, 70 F, and 90 F]
[Figure: Typical supply air temperature along rack height without any containment]
Notable Achievements: Austin Data Center (ADC)
- Support Power Ratio reduced to 0.41 (or PUE of 1.41)
- Participating in EPA Energy Star pilot
- ADC is EPA's Green Power Partner for purchasing green power
- Energy savings ~$2M/year at ADC
- Additional savings at other data centers and labs
VFD in Existing Data Center Without Containment?
- Wireless sensors to monitor air inlet temperatures
- Control CRAC unit fan speed
- Enclosure not always needed
- 36 temp sensors; 6 VFD unit controllers
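The slides do not show the control logic, but a minimal sketch of the idea, assuming the hottest sensor-reported rack inlet temperature drives the CRAC/CRAH fan VFDs (the setpoint, gain, and speed limits here are illustrative assumptions, not values from the presentation):

```python
# Hypothetical sketch: drive CRAC/CRAH fan speed from the hottest rack-inlet
# temperature reported by wireless sensors. Setpoint, gain, and speed limits
# are illustrative assumptions, not values from the presentation.

def fan_speed_command(inlet_temps_f, setpoint_f=75.0, gain=0.05,
                      min_speed=0.4, max_speed=1.0):
    """Return a fan speed fraction (0-1) for the VFDs based on the
    hottest measured rack inlet temperature."""
    hottest = max(inlet_temps_f)
    # Proportional control: speed up when the hottest inlet exceeds setpoint.
    speed = min_speed + gain * (hottest - setpoint_f)
    return max(min_speed, min(max_speed, speed))

# Example: a few of the 36 sensors reporting inlet temperatures in deg F
readings = [72.5, 74.0, 77.2, 73.1]
print(fan_speed_command(readings))  # -> fan speed fraction for the CRAH VFDs
```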
VFD without containment: results
- IT load: 165 kW
- Cooling capacity: 420 kW nominal (6 CRAH units of 20 tons capacity each)
- No containment used
- Fan power reduced from 0.20 to 0.036 kW per kW of IT
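The roughly five-fold drop in fan power per kW of IT is what the fan affinity laws would predict for a modest airflow reduction; a back-of-the-envelope check, with the cube-law relationship taken as an assumption rather than something stated on the slide:

```python
# Back-of-the-envelope check using the fan affinity laws (an assumption here,
# not stated on the slide): fan power scales roughly with the cube of speed.
before = 0.20    # kW of fan power per kW of IT load, constant-speed fans
after  = 0.036   # kW per kW after adding VFDs (no containment)

power_ratio = after / before                 # ~0.18
implied_speed = power_ratio ** (1.0 / 3.0)   # ~0.57 -> fans at roughly 57% speed
print(f"power ratio {power_ratio:.2f}, implied fan speed {implied_speed:.0%}")
```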
Can you install VFDs in DX CRAC units?
- EPRI is conducting a field demo with funding from the California Energy Commission
- New systems are designed with modulating compressor capacity along with fan speed
- Can you vary fan speed in CRACs with compressor on/off controls?
Concerns/Issues:
- Condensate freeze-up on the cooling coil at low fan speeds
- Over-dehumidification?
EPRI data center site in Palo Alto, CA: 94 kW IT load, two 30-ton CRAC units
[Chart values: 19.4%, 26.6%, 30.9%]
Oracle's Utah Compute Facility (UCF): Oracle's Newest Green Data Center
- Very energy efficient
- Uses natural cooling of Utah
- ~$1.8 million in utility incentives
Oracle UCF Supercell Construction
- Outside-air cooling will help achieve more power efficiency
- The facility will be divided into four independent supercells
- Networking and power distribution gear will be separate from the supercells so computing power can be more concentrated within the data center
Energy use in data centers
By ICT equipment:
- CPU, memory chips; power conversion devices; fans
- ICT equipment operations (e.g., virtualization, sleep mode, part-load performance)
Infrastructure support:
- Power conditioning/UPS/distribution
- Cooling chips to outdoor air: fans, circulation pumps, compressors/chillers, cooling towers/fluid coolers
Peak design vs. economy mode
- Design for peak performance vs. design for efficiency/economy
- Peak performance required <5% of the time
- Economy operation >95% of the time
- Do not ignore economy performance; reduce power use during economy operation (think of sleep mode)
- Load diversity helps reduce total demand even if the individual peak demands are the same (see the sketch below)
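A tiny made-up illustration of the load-diversity point: two loads that each peak at 1 kW, but at different hours, so the combined peak stays below the 2 kW sum of the individual peaks.

```python
# Illustrative (made-up) hourly demand profiles in kW for two loads that each
# peak at 1.0 kW but at different hours.
load_a = [0.4, 0.6, 1.0, 0.7, 0.5]
load_b = [0.9, 1.0, 0.5, 0.4, 0.6]

combined = [a + b for a, b in zip(load_a, load_b)]
print(max(load_a) + max(load_b))  # 2.0 kW if the peaks were coincident
print(max(combined))              # 1.6 kW actual combined peak (diversity benefit)
```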
Cost/budget constrained scenario
Typical data center costs for each 1 kW of power provisioning:
- ~$10-25,000+ in infrastructure
- ~$15-25,000+ in IT hardware
- ~$15-25,000+ in software/apps
Design more efficient IT hardware to move costs away from infrastructure so the customer can buy more hardware and software!
Power constrained supply scenario
For example, if only 1 kW of power is available, what is your customer likely to buy for their growing business?
- 3 servers/IT equipment of 333 watts each? or
- 4 servers/IT equipment of 250 watts each? or
- 5 servers/IT equipment of 200 watts each?
Think what will make your company grow!
Example: Power constrained scenario with 10 MW available power

Support Power to IT Power Ratio        1 to 1   0.7 to 1   0.50 to 1   0.33 to 1   0.25 to 1
Available IT/Server Capacity, MW       5        5.9        6.7         7.5         8.0
Support Equipment Capacity, MW         5        4.1        3.33        2.5         2.0
Total Power, MW                        10       10         10          10          10
Extra Server Capacity vs 1-to-1, MW    0        0.9        1.67        2.5         3
% Increase in IT/Server Capacity       0%       18%        33%         50%         60%
PUE                                    2.0      1.7        1.5         1.33        1.25
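The table follows from the definitions used throughout the deck: IT capacity = total power / (1 + support ratio), support capacity is the remainder, and PUE = 1 + support ratio. A short sketch that reproduces the rows:

```python
# Reproduce the table: for a fixed site power, a lower support-to-IT ratio
# frees more of the feed for IT/server capacity.
total_mw = 10.0
ratios = [1.0, 0.7, 0.50, 0.33, 0.25]     # support power per unit of IT power

baseline_it = total_mw / (1 + ratios[0])  # 5 MW at a 1-to-1 ratio
for r in ratios:
    it_mw = total_mw / (1 + r)            # available IT/server capacity
    support_mw = total_mw - it_mw         # support equipment capacity
    extra_mw = it_mw - baseline_it        # extra server capacity vs 1-to-1
    pue = 1 + r                           # PUE = total power / IT power
    print(f"ratio {r:.2f}:1  IT {it_mw:.1f} MW  support {support_mw:.2f} MW  "
          f"extra {extra_mw:.2f} MW (+{extra_mw / baseline_it:.0%})  PUE {pue:.2f}")
```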
Data Center Cooling Chain: Seven degrees of separation/freedom from chip to outdoor air
1. IT equipment chip to air: server/IT equipment fans
2. Remove hot air from room: CRAC/CRAH unit fans
3. Room hot air to secondary fluid: pumps
4. Secondary fluid to primary fluid/chilled water: pumps
5. Primary fluid/chilled water to condenser water: chillers
6. Condenser water to cooling tower: pumps
7. Cooling tower to air: cooling tower fans
How much energy is used for airflow?
- Energy used for airflow in data centers can vary widely and wildly: 0.04 to 0.4 kWh per kWh of the IT equipment
- In overall PUE, 0.04 to 0.4 can be contributed by airflow alone
- Energy used for airflow turns into heat and must be removed by the chiller/air conditioner, causing even more energy use
[Chart: Fan energy use, kWh per kWh of IT equipment energy use]
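A rough sketch of why fan energy hits the bill twice: the fan energy itself counts as support power, and the fan heat must then be removed by the cooling plant. The chiller COP used here is an illustrative assumption, not a figure from the slides.

```python
# Rough sketch of the airflow contribution to PUE. The fan energy adds
# directly to support power, and the fan heat must then be removed by the
# chiller; the COP value is an illustrative assumption, not from the slides.
def airflow_pue_contribution(fan_kwh_per_it_kwh, chiller_cop=4.0):
    direct = fan_kwh_per_it_kwh                       # the fan energy itself
    chiller_extra = fan_kwh_per_it_kwh / chiller_cop  # energy to remove fan heat
    return direct + chiller_extra

for fans in (0.04, 0.4):
    print(f"{fans:.2f} kWh/kWh of IT -> ~{airflow_pue_contribution(fans):.3f} added to PUE")
```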
Relative Power/Energy Use in Data Centers
[Chart comparing: Heat Generating Equipment, Chilled Water Cooling Plant, Water Cooled DX Cooling, Air Cooled DX Cooling. Ref: EPRI]
Differences in airflow for comfort conditioning and data centers
Comfort conditioning technology is typically used in data centers:
- Heat removed = 1.08 × CFM × (RAT − SAT)
- End point: RAT
- Two independent parameters can be changed: CFM & SAT
- Well-mixed air in the space
Data center requirements are different:
- Heat removed = 1.08 × CFM × (RAT − SAT)
- End points are: CFM and SAT
- Need to supply both the required CFM and SAT
- Who cares about RAT?
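A worked instance of the sensible-heat equation on the slide (airflow in CFM, temperatures in °F, heat in Btu/h); the airflow value is illustrative, chosen to roughly match the 165 kW IT load from the earlier VFD example:

```python
# Sensible heat removed by an airstream (standard air):
#   Q [Btu/h] = 1.08 * CFM * (RAT - SAT)
def heat_removed_btuh(cfm, rat_f, sat_f):
    return 1.08 * cfm * (rat_f - sat_f)

# Illustrative numbers: 68 F supply, 85 F return, with the airflow chosen to
# roughly match a 165 kW IT load (1 kW = 3412 Btu/h).
q_btuh = heat_removed_btuh(cfm=30_700, rat_f=85, sat_f=68)
print(round(q_btuh), "Btu/h =", round(q_btuh / 3412, 1), "kW")
```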
Other differences in comfort air conditioning vs. data centers

                          Comfort        Data Centers
Supply air temperatures   ~55-60 F       ~75-80 F
Control                   Return air     Supply air
Chiller temperature       ~45-50 F       ~60-65 F
Cooling needs             Seasonal       Year round
Cooling tower             Summer only    Summer and winter
Opportunities
- ICT equipment operations: part-load operation, sleep mode
- Cooling chain equipment efficiency: attention to part load
- Airflow management
- Cooling (economizer, direct/indirect)
- Chillers
- Pumping systems: auto adjust/minimize head pressure
- Heat rejection: cold-climate cooling towers, dry coolers
- Sharing of data between IT equipment and the cooling/power chains for efficient controls