Data Center Efficiency Metrics and Methods




Elements of Data Center Efficiency

In effect, what a data center is and does defines what metric is used to decide whether it is efficient or inefficient. So how do we judge data center efficiency? Because the output of a data center is not a physical object per se, we have been forced to split our metrics into two distinct groups: computing efficiency and physical infrastructure efficiency. Ultimately, the ideal metric would include both.

Computing efficiency has had a long history of metrics, almost since the first vacuum tube mainframe could add the proverbial 01+01. In addition, the speed of subsystem hardware such as memory, I/O and storage has many individual metrics of its own, all of which contribute to actual system throughput. Computing systems as a whole have many different system and industry benchmarks, and the nature of their output is constantly changing and being redefined. In essence, the system is not just hardware that can be quantitatively measured but also the software and the different tasks it performs. That can make it difficult to define and compare the useful work that a system produces.

In October 2007, The Green Grid, a global consortium of IT companies and professionals seeking to improve energy efficiency in data centers, tried to address this question and introduced the following metric:

Data Center Productivity = Useful Work/Total Facility Power

This accompanied its original PUE and DCiE metrics. Still, this remains an open issue, because the concept of what a data center's useful work is makes it a difficult, if not impossible, metric to calculate or to get any group to agree on. A similar concept was put forth in 2008 as Corporate Average Data Efficiency (CADE) by the Uptime Institute, a data center research, education and consulting organization in New York, and management consulting firm McKinsey & Co.:

CADE = Facility Efficiency x IT Asset Efficiency
Facility Efficiency = Facility Energy Efficiency Percentage x Facility Utilization Percentage
IT Asset Efficiency = IT Utilization Percentage x IT Energy Efficiency Percentage

But neither CADE nor Data Center Productivity caught on the way PUE has. That may be because the PUE metric is easier to implement: it involves only direct infrastructure power measurements, with none of the complexity of reaching common agreement on what useful work, computing efficiency or IT asset efficiency means.

Infrastructure efficiency is a relatively new area of interest. Until recently, the concept of energy efficiency in data centers was a distant second to reliability and availability. The Green Grid and the PUE metric came into existence only three years ago. And although there are many critics of the PUE metric, it has helped the industry begin to address energy usage and efficiency. Fortunately, infrastructure energy efficiency is a relatively concrete area whose components can be quantified and calculated:

PUE = Total Facility Power/IT Equipment Power

Its reciprocal, DCiE, is defined as:

DCiE = IT Equipment Power/Total Facility Power x 100%

For example:

PUE = 200 KW (Total Facility Power)/100 KW (IT Equipment Power) = 2.0
DCiE = 100 KW (IT Equipment Power)/200 KW (Total Facility Power) = 50%
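For readers who want to reproduce the arithmetic, the following is a minimal sketch in Python (not part of the original article) of the PUE and DCiE calculations above, using the same 200 KW and 100 KW example figures.

```python
# Minimal sketch of the PUE and DCiE arithmetic described in the text.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = Total Facility Power / IT Equipment Power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """DCiE = IT Equipment Power / Total Facility Power x 100%."""
    return it_equipment_kw / total_facility_kw * 100.0

print(pue(200.0, 100.0))   # 2.0
print(dcie(200.0, 100.0))  # 50.0 (%)
```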

PUE and DCiE are really the same metric expressed in two ways, but PUE has become the more common reference. The major advantage of this metric is its inherent simplicity, which allows it to be easily understood and, therefore, broadly accepted.

One of the problems with the PUE metric, though, is that it uses power, which is an instantaneous value. Claims of a favorable PUE can be made that do not represent a monthly or true annualized average of the data center's efficiency. Of course, once everyone became familiar with PUE, many sites and even some equipment manufacturers started posting how low their PUE is, although equipment cannot have a PUE. Some manufacturers have even claimed that they are very close to the perfect PUE of 1.0, which would represent an infrastructure that uses no energy. In reality, most sites fall in a PUE range of 1.5 to 2.5 if they take measurements properly according to The Green Grid's guidelines. As a result, there are now also references to an annualized PUE, which more accurately reflects overall efficiency.

The ability to monitor the total power to a data center facility in real time, as opposed to waiting for the monthly utility bill, requires that sensors be installed at the key measurement points. To measure cooling energy usage, the electrical distribution panels will need a potential (voltage) transformer, as well as current transformers installed on each output circuit feeding a cooling component. If an entire panel is dedicated to cooling only, then you need to monitor just the main feed to obtain the total cooling energy reading. This costs less and will still allow you to calculate your PUE, but it does not provide the detailed information needed to optimize individual areas or sub-systems.

Although each type of cooling system is different, it is important to include all the components, not just the computer room air conditioning units (CRACs): chillers, external fan decks and pumps as well. Also make sure that the system you are considering can measure actual power (KW), not just KVA, and can record and display energy use over time and provide useful analysis of the information.

It should also be able to break the information into subsets, such as cooling and IT load. And last, but not least, it should be able to calculate PUE.

In fact, during the U.S. Environmental Protection Agency's sample study as part of the Energy Star for Data Centers program, it became apparent that of the 120 sites involved, a majority were not capable of accurately monitoring the total facility load and the IT load. This was especially true for data centers that were part of a mixed-use building. In the end, the EPA decided to soften the measurement requirements and accept the UPS output instead of the actual IT load. Ideally, the measurement points should be more granular to allow for a more detailed analysis of where the power is going, such as down to the rack level or even the device level.

The program was more than two years in the making and originally involved monitoring more than 120 data centers, but only 61 became valid reference sites. Nonetheless, after 11 months of efficiency sampling, the PUE ranged from a best case of 1.36 to a worst case of 3.6, with a median of 1.9.

Power vs. Energy

Although most people use the terms power and energy interchangeably, they are not the same. Power is an instantaneous measurement expressed in watts or kilowatts (KW), while energy is power over time, expressed in kilowatt-hours (KWH). The U.S. Environmental Protection Agency originally used the term energy usage efficiency (EUE) during the early part of the program and later decided to change it to power usage effectiveness, or PUE, because the industry had adopted PUE. However, although the EPA calls it PUE in the final version of the program, it has intermixed the terms and has redefined its version as PUE = Total Energy/IT Energy.
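As a companion to the sidebar, here is a minimal sketch in Python (not part of the original article) contrasting an instantaneous, power-based PUE reading with an annualized, energy-based PUE of the kind the EPA's final program defines (PUE = Total Energy/IT Energy). The sample readings below are hypothetical.

```python
# Contrast a single power-based PUE reading with an energy-based PUE
# computed over a full metering period.

def pue_from_power(total_facility_kw: float, it_kw: float) -> float:
    """Instantaneous PUE from power (KW) readings."""
    return total_facility_kw / it_kw

def pue_from_energy(total_facility_kwh: float, it_kwh: float) -> float:
    """Annualized PUE from metered energy (KWH) over the same period."""
    return total_facility_kwh / it_kwh

# A single cool-night reading can look flattering...
print(round(pue_from_power(180.0, 100.0), 2))          # 1.8
# ...while a full year of metered energy tells the real story.
print(round(pue_from_energy(1_752_000, 876_000), 2))   # 2.0
```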

In addition to the EPA program, the U.S. Department of Energy has its DC Pro Software Tool Suite, which includes a Profiling Tool and System Assessment Tools to perform energy assessments on data center systems.

The EPA released the final official version of the first Energy Star for Data Centers program on June 7. Although this is still a voluntary standard, the data center industry should take heed and consider the inherent advantages of improving energy efficiency now, if only to lower operating costs.

Many vendors, both start-ups and major manufacturers, offer energy monitoring and management packages. They can accept information that may already be available from different brands of equipment, such as the UPS, and some can also integrate with existing building management systems. A variety of systems offer a choice or combination of hardwired, networked or even wireless remote units for measuring temperature, humidity and power. The cost of these systems varies widely, based on the number of points and types of measurement, as well as the sophistication and features of the monitoring software package. Costs range from $5,000 to $10,000 for a basic system to between $50,000 and $100,000 or higher for a large-scale installation. Some vendors even offer hosted monitoring solutions for a monthly fee to lower the capital cost of entry.

Even if you have not given serious thought to how efficiency affects your existing data center's operation, it will force you to reconsider your operating practices, and it will certainly have an impact on any future data center designs and the equipment you purchase. Remember that efficiency has its own economic reward: an improved bottom line. Keep in mind that a data center is a 24/7 operation and IT loads run year-round, which equals 8,760 hours in a year. At 11.5 cents per KWH, each KW saved or wasted represents about $1,000 a year. If your data center operates at a PUE of 2.0, then each KW of IT load results in a $2,000 annual cost. Ideally, the most efficient data center is one whose hardware and software deliver the most useful work, built on an infrastructure that supports the IT load with the lowest overhead.
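The following is a minimal sketch in Python (not part of the original article) of the annual-cost arithmetic above: 8,760 hours per year at 11.5 cents per KWH, scaled by PUE.

```python
# Annual electricity cost of one KW of IT load at a given PUE,
# using the hours-per-year and rate figures cited in the text.

HOURS_PER_YEAR = 8_760
RATE_PER_KWH = 0.115  # dollars

def annual_cost_per_it_kw(pue: float) -> float:
    """Yearly cost of powering 1 KW of IT load plus its infrastructure overhead."""
    return pue * HOURS_PER_YEAR * RATE_PER_KWH

print(round(annual_cost_per_it_kw(1.0)))  # 1007 -> roughly $1,000 per KW
print(round(annual_cost_per_it_kw(2.0)))  # 2015 -> roughly $2,000 per KW of IT load at PUE 2.0
```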

SERVER

Server Efficiency

Today's swirling market of green technologies has brought energy-saving technologies to the forefront of almost any IT discussion. Although that's good news for savvy, energy-smart resellers, there is a dark side that presents a real challenge to resellers and their customers: how to quantify energy use and report it in an actionable fashion. Some would be quick to say it is an easy problem to solve: just measure the power used by a server in kilowatt-hours, apply an hourly rate and get an answer. Seems simple enough? But there is much more to judging energy efficiency than counting watt-hours. Often forgotten are essential elements such as CPU utilization, cooling costs, component costs and the effect of loads on the server's subsystems. Add to that often-overlooked factors such as power supply efficiencies and rack-based component demands, and it becomes easy to see that there is a lot more to power efficiency than the electric company's energy delivery fees.

JUSTIFYING A SERVER REFRESH

Server efficiency has a dramatic impact on the PUE ratio, helping to justify the expenses associated with a server refresh by demonstrating the impact on PUE. Server efficiency is dictated by loads. For example, a server that uses little power yet has no CPU utilization is far from efficient. To truly estimate server efficiency, one has to look at CPU utilization as a starting point. Many studies have shown that typical server CPU utilization is less than 15% and, in most cases, as low as single digits. Those numbers can be vastly improved by turning to virtualization technologies, where CPU utilization is often increased to beyond 60%, increasing inherent efficiency without requiring a server refresh.

Although increased CPU utilization does lead to increased server efficiency, other methodologies should not be discounted, and other measurements should be taken before settling on virtualization as the only way to affordably enhance efficiency and improve PUE. Processing power per rack and the power footprint of each rack should be incorporated into efficiency calculations.

It all comes down to a matter of density, where density is defined as the number of processor cores per rack. Here, the implementation of blade server technologies leveraging multiple-core CPUs can increase rack density to as many as 1,024 CPU cores per rack. That density level has the potential to replace dozens of racks in a data center and reduce the power and cooling footprints of the facility. Server design is the primary catalyst for achieving densities of that magnitude. Achieving maximum efficiency with heavily populated racks usually requires looking at the rack and its integrated components as a whole, because high CPU densities dictate that the rack be designed to deliver the power, cooling and management needs of the installed blades. That said, physical server consolidation combined with virtualization technologies leads to the most efficient solution.

Nevertheless, there are still many other factors that go into quantifying server efficiency, consisting of the technologies designed to reduce power consumption as well as cooling needs. A closer look at each of those technologies helps to quantify the impact a server refresh may have on server efficiency. Those technologies include:

• Energy Star Specifications: Servers that meet Energy Star requirements incorporate various technologies to reduce power consumption and usually have a stated savings percentage.

• New CPUs: Many of the latest-generation CPUs incorporate technologies designed specifically to reduce power consumption, such as multiple cores, throttling, voltage drops, integrated virtualization support, the ability to shut down unused cores and across-core load balancing.

• Power Supplies: New standards are increasing the efficiency of power supplies from lows of 60% to more than 90% in the latest designs.

• Fanless Designs: The elimination of cooling fans reduces power consumption. Fanless server designs use less energy and generate less heat.

• Increased Density: More CPUs and more cores are being integrated into individual servers or server blades, delivering much more processing power without increased power consumption.

To properly calculate the potential savings of a server refresh, each of the elements above must be considered, with the goal of reducing operational costs, which is what improved efficiency should lead to.

REDUCING OPERATING COSTS

A 2007 study performed by Lawrence Berkeley National Laboratory showed that servers in a data center account for about 55% of the electricity costs; the remaining power is used by the supporting equipment. With that in mind, it is easy to see how servers affect power consumption and drive the power requirements of the supporting equipment. Simply put, server consolidation can deliver the largest payback over the shortest amount of time for most data centers. Consolidating can save about $560 per year per server. Using virtualization and newer blades to achieve server consolidations of 10 servers into one can deliver substantial savings for even a small company and a large opportunity for a solution provider in both hardware sales and services. Consolidation also cuts down on the amount of heat generated in data centers. But, as with power consumption, it isn't just the servers that generate heat; the equipment supporting the servers adds to the heat generation as well.
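The following is a minimal sketch in Python (not part of the original article) of the consolidation arithmetic above, using the article's figure of roughly $560 saved per year for each server removed. The fleet size in the example is hypothetical.

```python
# Rough annual savings from consolidating physical servers at a given ratio.

SAVINGS_PER_SERVER_REMOVED = 560  # dollars per year, figure cited in the text

def consolidation_savings(physical_servers: int, ratio: int = 10) -> int:
    """Estimated annual savings from consolidating N servers at ratio:1."""
    remaining = -(-physical_servers // ratio)  # ceiling division
    return (physical_servers - remaining) * SAVINGS_PER_SERVER_REMOVED

print(consolidation_savings(100))  # 50400 -> about $50,000 a year for 100 servers at 10:1
```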


COOLING

Data Center Cooling Efficiency

Cooling is the second-largest use of energy after the IT load itself. In fact, if your data center has a PUE of 2.2 or greater, and many do, it is highly likely that your cooling system is using more power than the IT load. Consequently, cooling is the area that offers the greatest opportunity for improvement. Until you have installed some form of energy monitoring and metering, however, it is impossible to determine whether you have made any improvements. At a minimum, you will need to measure the energy to all the components of the cooling system. This is not as easy as it sounds, because many installations do not have a centralized panel where all the cooling system components appear in one place so that you can measure the total.

One of the most frequent cooling problems is an indirect byproduct of the latest trend in computing efficiency improvements: virtualization and blade servers. The average blade server requires 4 KW to 5 KW of power, some units as much as 8 KW, and that produces a great deal of heat. Given the capacities and methodologies of today's cooling systems, consider the various power/heat density levels:

Heat Load per Rack
Low Density (Standard): 1 KW to 3 KW
Moderate Density: 3 KW to 5 KW (one blade server)
High Density (Threshold): 6 KW to 8 KW (two blade servers)
High Density (Moderate): 9 KW to 14 KW (three blade servers)
High Density (Very): 15 KW to 20 KW (four blade servers)
High Density (Extreme): 20 KW and up
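The following is a minimal sketch in Python (not part of the original article) that maps a rack's measured IT load to the density tiers listed above. The band edges come from the table; treating the bands as inclusive upper bounds is an assumption.

```python
# Classify a rack's load (KW) against the heat-load-per-rack tiers above.

DENSITY_TIERS = [
    (3,  "Low Density (Standard)"),
    (5,  "Moderate Density"),
    (8,  "High Density (Threshold)"),
    (14, "High Density (Moderate)"),
    (20, "High Density (Very)"),
]

def density_tier(rack_kw: float) -> str:
    """Return the density label for a rack drawing rack_kw kilowatts."""
    for upper_kw, label in DENSITY_TIERS:
        if rack_kw <= upper_kw:
            return label
    return "High Density (Extreme)"

print(density_tier(4.5))   # Moderate Density (roughly one blade server)
print(density_tier(22.0))  # High Density (Extreme)
```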

This has led to the development of many different methods for dealing with these high-density cooling loads. One of the growing trends is the increased use of high-density localized cooling. It comes in various forms, such as in-row or overhead cooling and other forms of contained cooling at the aisle level or even the rack level. Generally this is considered close-coupled cooling. Some systems use water to remove the heat from in-row cooling units, while others use a refrigerant gas. Still others use CO2 under pressure, using phase change to handle extreme-density racks, and are claimed to handle up to 50 KW per rack. Besides being able to effectively handle these high-density loads, close-coupled cooling is also more efficient because it requires much lower fan power to move the air 3 to 10 feet rather than the 15 to 30 feet typical of traditional raised-floor cooling.

Many users are not comfortable with the concept of water being near computer equipment, but early mainframes used water that ran directly into the equipment cabinets. Coming full circle, IBM introduced the Hydrocluster in 2008, which runs the water directly into the server chassis and even into the CPU itself. Although not necessarily a mainstream solution, water and other fluids are far more efficient than air at removing heat. In addition, water allows higher densities and less cooling energy. IBM has said it envisions that the heated water could be used to heat adjacent buildings or provide hot water for facilities.

MORE CONVENTIONAL SOLUTIONS

For those who want to improve their data center's cooling efficiency without going to these extremes, there are many more conventional solutions.

One of the most common problems is the mixing of hot and cold air between the hot and cold aisles. The addition of a containment system will mitigate this problem and improve efficiency. Another common problem is so-called bypass air within the cabinets themselves. A simple solution is the installation of blanking panels in the unused spaces in the racks to prevent the warm air at the rear from being drawn forward within the cabinet.

Another way to improve cooling efficiency is to raise the air temperature from the traditional 68 to 70 degrees Fahrenheit to 75 to 78 degrees Fahrenheit. In 2008, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recognized this and released its TC 9.9 guidelines, which stated that IT equipment could safely operate at up to 80.6 degrees Fahrenheit. It is imperative that the temperature be raised slowly and that all equipment intake air temperatures be constantly monitored to make sure that none exceed the maximum allowable temperature limits.
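The following is a minimal sketch in Python (not part of the original article) of the monitoring discipline just described: raise the supply temperature only while confirming that no equipment intake exceeds the allowable limit (80.6 degrees Fahrenheit is the ASHRAE figure cited above). The sensor names and readings are hypothetical.

```python
# Flag any rack intake temperatures that exceed the allowable limit.

ASHRAE_MAX_INTAKE_F = 80.6  # upper limit cited in the text

def intakes_over_limit(readings_f: dict[str, float],
                       limit_f: float = ASHRAE_MAX_INTAKE_F) -> dict[str, float]:
    """Return the sensors whose intake temperature exceeds the limit."""
    return {name: temp for name, temp in readings_f.items() if temp > limit_f}

readings = {"rack-A01": 76.2, "rack-A02": 81.4, "rack-B07": 79.9}
print(intakes_over_limit(readings))  # {'rack-A02': 81.4} -> stop raising the setpoint
```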

FREE COOLING

No discussion of cooling efficiency would be complete without including so-called free cooling, also known as economizers:

• Air Side Economizers: The underlying concept of using air economizers is relatively simple: just open the window when it is cooler outside than inside. However, the implementation in a data center is not quite that easy. The temperature and humidity must be controlled, and any sudden changes in either must be avoided. Air side economizers cannot just bring in cooler outside air and exhaust the warm air to reduce the mechanical cooling requirements; they also need to filter the outside air and provide a range of humidity control without expending more compressor energy than necessary.

• Water Side Economizers: The more popular water side fluid cooler systems are incorporated, with control valves, into the warmed return side of the chilled water or glycol loop, which can then be re-routed through a fluid-to-air heat exchanger before coming back to the chiller plant. This allows partial or total free cooling by either reducing or totally eliminating the need to run the chiller compressors on cold days.

It should be noted that a significant portion of larger data centers use evaporative cooling systems. Although they are more energy-efficient than air-cooled systems, they do use a significant amount of water. Some large data centers use millions of gallons per month, yet the issue of water usage is ignored in all of The Green Grid and EPA metrics. Water shortages may become a critical factor in the coming years. Some of the newer large data centers have begun to incorporate water recycling into their cooling systems. Others are located where water is readily available and unconstrained, at least for the present. Although water usage is not just a data center problem (many energy-intensive industries, including utility power plants themselves, also use huge amounts of water for cooling), it would seem disingenuous to speak of data center efficiency metrics without any mention of water usage.

DISTRIBUTION

UPS and Power Distribution Efficiency

Conditioned and reliable power is taken for granted in data centers. Until recently, the efficiency of an uninterruptible power supply was almost a side note in many purchasing decisions. With the rising cost of energy, the focus now includes efficiency when reviewing potential UPS purchases for new data centers. A UPS has a range of specifications that are sometimes misinterpreted and confused with actual energy efficiency. To understand and calculate a UPS's true efficiency, let's examine each term and its relationship to the others (a worked example follows the list):

• Voltage: Three-phase/single-phase voltage in the U.S. and North America is normally specified as 480/277V or 208/120V, and as 400/230V in Europe. Note that 400/230V is a representative composite of various European and Asian systems, which include 380/220V, 400/230V and 415/240V. In addition, there are 600V systems used in some Canadian and U.S. facilities.

• KVA: Kilovolt-amperes, also known as apparent power. This represents volts x amps, expressed in thousands. For single phase: 120V x 50A = 6,000 VA = 6 KVA. For three-phase power: 208V x 50A x 1.732 = 18,012 VA = 18 KVA (note: 1.732 is the square root of 3).

• KW: Kilowatts, also known as actual power or heat value. This is calculated by multiplying KVA x power factor. For example, 18 KVA at a 0.9 power factor = 16.2 KW.

• Power Factor: In simplified terms, power factor (PF) represents the difference, expressed as a decimal multiplier such as 0.9, between KVA (apparent power) and KW (actual power, or heat value) for non-linear reactive loads such as computer power supplies, which behave unlike pure resistive loads such as incandescent bulbs or heaters.

• Power Factor Corrected: Also known as PFC, power factor corrected represents a non-linear reactive device's ability to compensate and improve its power factor to approach that of a pure non-reactive load. For example, older computer power supplies typically had a 0.8 input power factor, which meant that the UPS had to provide the equipment with 1,000 VA to deliver 800 watts. Most modern computer power supplies are power factor corrected at 0.95 or better, meaning that the UPS only needs to provide approximately 842 VA to deliver 800 watts.

• Input Power Factor: This is important to the upstream equipment in the power chain. Like older computer power supplies, older UPSes were rated at 0.8 PF and required more apparent power (KVA) to deliver actual power (KW). For example, an older 100 KVA UPS with an input power factor of 0.8 would require approximately 125 KVA from the upstream equipment. A modern 100 KVA UPS with an input power factor of 0.95 would require only approximately 105 KVA.

• Output Power Factor: Most uninterruptible power supplies today are offered by different vendors with different output power factors, typically rated at 0.8, 0.9 and 1.0. This commonly misunderstood specification represents the real net power rating of the UPS and how much actual power it can deliver to the IT load. For example, a 100 KVA UPS with a 0.8 output power factor can deliver only 80 KW. If it were rated at 0.9 PF, it could deliver 90 KW, and at 1.0 it could deliver 100 KW to the IT load. This is especially important because most IT loads today are power factor corrected at 0.95 or better. When evaluating different UPSes and their relative costs, consider the output power rated in KW, not in KVA.
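Here is the worked example promised above: a minimal sketch in Python (not part of the original article) of the KVA, KW and power factor arithmetic defined in the list, reproducing the figures from the text.

```python
# KVA/KW/power-factor arithmetic from the definitions above.
import math

def kva_single_phase(volts: float, amps: float) -> float:
    """Apparent power (KVA) for a single-phase circuit."""
    return volts * amps / 1_000

def kva_three_phase(volts: float, amps: float) -> float:
    """Apparent power (KVA) for a three-phase circuit (factor of sqrt(3))."""
    return volts * amps * math.sqrt(3) / 1_000

def kw_from_kva(kva: float, power_factor: float) -> float:
    """Actual power (KW) = KVA x power factor."""
    return kva * power_factor

print(round(kva_single_phase(120, 50), 1))  # 6.0 KVA
print(round(kva_three_phase(208, 50), 1))   # 18.0 KVA
print(round(kw_from_kva(18, 0.9), 1))       # 16.2 KW
print(round(100 / 0.8))                     # 125 KVA drawn upstream by an older 0.8 PF, 100 KVA UPS
```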

MORE ABOUT UPS

Energy efficiency can only be correctly calculated in KW, not KVA. Do not be misled by anyone calculating in KVA or citing power factors as efficiency. Efficiency represents the amount of actual power delivered at the UPS output, in KW, divided by the input power in KW. For example, a UPS with a measured output of 91 KW and an actual input of 100 KW = 91% efficiency. This specification or measurement is usually based on the UPS operating with a fully charged battery. In effect, the other 9 KW are UPS losses and are converted to heat. Not all UPSes will report their input and output in KW; some may only report KVA. To get a valid measurement, a true power meter that measures KVA, KW and PF is required.

The biggest gray area among UPS vendors promoting their efficiency claims is citing just a single number, such as 91%, as the UPS efficiency specification. Although the claim of 91% efficiency may be true, it is usually only valid at or near full load, which is not how most uninterruptible power supplies operate. In many cases, a UPS will typically run at 40% or less of its rated capacity. This is especially true for N+1 or 2N redundant UPS installations. It is extremely important to get full disclosure of the efficiency curve over the entire load range, 10% to 100%, from the manufacturer. Because the UPS is an essential component in the computing power chain, the EPA is in the process of establishing an Energy Star standard for UPSes.

Older UPSes have extremely poor efficiency when operating in their lower ranges. In some cases, they are only 60% to 70% efficient at a 30% load.

In contrast, a modern UPS will have an 85% to 90% efficiency rating at only 30% of full load and 90% to 95% over the higher load range. Several manufacturers now offer modular UPS systems, which provide several advantages. They generally lower the initial capital costs, because the UPS can be purchased with a lower initial capacity, and they allow the UPS to operate at a higher utilization factor, which makes for more efficient operation. If your UPS is five years old or older, it may pay to review its actual operating range and efficiency. The capital cost of a new UPS may have a fairly quick cost recovery if you can achieve an overall 15% to 20% gain in UPS efficiency.

Another area that can affect electrical efficiency is the voltage used to feed the UPS and to distribute power to the floor-level power distribution units (PDUs), as well as to the IT equipment itself. Generally speaking, it is preferable to operate at the highest possible voltage, starting at the UPS. It is also more efficient, both from a distribution standpoint and from the IT equipment's perspective, to operate at 208V instead of 120V. Virtually all modern power supplies are universal and can operate from 100V to 250V, but they are 3% to 5% more efficient at 208V to 240V than at 120V. If possible, it is best to bring three-phase 208V power to every rack: power densities are increasing, and it is less costly to install the extra two conductors initially than to rewire later. This will allow three times the power for very little extra up-front cost.

A final note about the IT load itself: Check and inventory all the IT equipment. It has been shown repeatedly that there is a lot of equipment still plugged in and drawing power, but no one knows who owns it or what it does. The best way to improve data center efficiency is simply to turn off and remove all the equipment that is no longer needed or producing useful work.
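To put the UPS efficiency figures above in dollar terms, here is a minimal sketch in Python (not part of the original article) estimating the annual savings of replacing a lightly loaded older UPS with a more efficient one, using the 11.5 cents per KWH rate cited earlier. The 100 KW load and the two efficiency values are illustrative assumptions drawn from the ranges in the text, not measurements.

```python
# Estimate annual savings from a UPS efficiency upgrade at a given IT load.

HOURS_PER_YEAR = 8_760
RATE_PER_KWH = 0.115  # dollars

def ups_input_kw(it_load_kw: float, efficiency: float) -> float:
    """Power the UPS draws to deliver a given IT load; the difference is lost as heat."""
    return it_load_kw / efficiency

def annual_savings(it_load_kw: float, old_eff: float, new_eff: float) -> float:
    """Yearly electricity savings from the reduced UPS losses alone."""
    delta_kw = ups_input_kw(it_load_kw, old_eff) - ups_input_kw(it_load_kw, new_eff)
    return delta_kw * HOURS_PER_YEAR * RATE_PER_KWH

# An older unit at 70% efficiency vs. a modern one at 90%, both at partial load:
print(round(annual_savings(100.0, 0.70, 0.90)))  # 31981 -> roughly $32,000 a year, before cooling savings
```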

ABOUT THE AUTHORS

Julius Neudorfer is chief technical officer and a founding principal of Hawthorne, N.Y.-based North American Access Technologies Inc., which specializes in analyzing, designing, implementing and managing technology projects. He has designed and managed communications and data center projects for both commercial clients and government customers since 1987. Contact him at julius@naat.com.

Frank J. Ohlhorst is an award-winning technology journalist, professional speaker and IT business consultant with more than 25 years of experience in the technology arena. Ohlhorst has worked with all major technologies and accomplished several high-end integration projects in a range of industries, including federal and local governments as well as Fortune 500 enterprises and small businesses. Contact him at fohlhorst@gmail.com.

Cathleen Gagne, Editorial Director, cgagne@techtarget.com
Matt Stansberry, Executive Editor, mstansberry@techtarget.com
Christine Casatelli, Editor, ccasatelli@techtarget.com
Marty Moore, Copy Editor, mmoore@techtarget.com
Linda Koury, Art Director of Digital Content, lkoury@techtarget.com
Jonathan Brown, Publisher, jebrown@techtarget.com
Peter Larkin, Senior Director of Sales, plarkin@techtarget.com

TechTarget, 275 Grove Street, Newton, MA 02466, www.techtarget.com

© 2010 TechTarget Inc. No part of this publication may be transmitted or reproduced in any form or by any means without written permission from the publisher. For permissions or reprint information, contact Renee Cormier, Director of Product Management, Data Center Media, TechTarget (rcormier@techtarget.com).

ABOUT OUR SPONSOR

• Energy University: Become a champion of energy efficiency. Enroll today.
• APC TradeOff Tools: Simple, automated tools to support specific planning decisions.
• Introducing the smarter Smart-UPS.

About APC by Schneider Electric: APC by Schneider Electric, a global leader in critical power and cooling services, provides industry-leading products, software and systems for home, office, data center and factory floor applications. APC delivers pioneering, energy-efficient solutions for critical technology and industrial applications.