Design and Operation of Energy-Efficient Data Centers
Rasmus Päivärinta
Helsinki University of Technology

Abstract

Data centers are facilities containing many server computers. Their financial and social importance is greater than ever due to the growing amount of data and hosted applications used by millions of users. Consequently, maintaining an infrastructure with such massive computing resources is energy intensive: the total amount of energy used by servers around the world is greater than that of Finland. There is pressure to save energy in data centers because of the current need to cut costs and reduce carbon emissions. In this paper, proposals for increasing energy efficiency in computing facilities are made. The cooling system is analysed and concrete recommendations for saving energy are presented. A typical power distribution scheme is explained and its effect on efficiency is studied. In addition, this paper shows that underutilisation causes serious losses, and two approaches, namely virtualisation and power-aware request distribution, are introduced as solutions.

KEYWORDS: data center, energy efficiency, virtualisation, blade server

1 Introduction

As a result of several worldwide trends, the energy efficiency of data centers has become an important research topic. The amount of digital information has been growing rapidly, and most data is stored and processed in data centers. However, these buildings filled with server hardware consume significant amounts of electrical power: server electricity use doubled between 2000 and 2005 [9]. At the same time, environmental values have gained public attention and the price of electricity has increased [6]. Advances in energy efficiency lead to substantial cost savings and to brand image improvements through environmental friendliness. In some cases, even the availability of a data center can improve as a result of better planned cooling solutions.
This article gives insight into how data centers can be designed and operated in such a way that less electrical power is required. The rest of this article is organized as follows. The second chapter briefly describes the scientific methods applied during the research process. The fundamentals of data centers are summarized in the third chapter. The fourth chapter introduces the most important sources of electrical power consumption in a data center environment. Concrete proposals for improving energy efficiency are analysed in the fifth chapter. Chapters six and seven propose further research topics and conclude the article, respectively.

2 Methodology

This paper is based on an analysis of relevant scientific articles. I began my analysis by reading articles [12] and [9], which discuss the energy efficiency of computing at the macro level. Afterwards, I was convinced that the research question is important, and that the energy efficiency of data centers can have a serious impact both financially and environmentally. The main research question in this paper is how to improve energy efficiency through smart design and operation of computing facilities. The question is addressed very concretely in a 2007 paper by Greenberg et al. [8], which has been used as a framework for finding the most important topics for this paper. However, [8] does not discuss any single topic in great depth, and therefore I have analysed cooling, power provisioning, blade servers and virtualisation in greater detail by examining [13], [7], [11] and [1]. A number of other articles have been referenced for additional insight and background information; the complete listing can be found in the bibliography.

3 Description of a Data Center

Data centers are dedicated rooms or even whole buildings containing large numbers of computers. These computers are servers which provide applications for companies.
Applications are often business and mission critical, meaning that downtime causes economic losses. High availability is achieved by means of redundant network connections and power supplies. All of the electrical appliances installed, such as network and server hardware, add to the total heat generation of a data center. Thus, a reliable cooling solution is required to keep the servers up and running constantly. The smallest data centers are not much larger than a single rack, while the biggest spread over thousands of square metres. A rack is a standardised metal frame or enclosure which is 19 inches wide and usually 42 units (U) tall. One unit refers to a height of 1.75 inches, and most commodity servers are 1U or 2U high. In a typical scenario, the server racks are placed on a raised floor area which allows the installation of cabling and the flow of cool air under the hardware. According to Beck [3], on average, the raised floor area accounts for half of the total area, while the computer equipment occupies 25-30% of the raised area.
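The rack arithmetic above (42U racks, 1U = 1.75 inches) can be sketched in a few lines; the server heights are just the typical 1U/2U cases mentioned in the text:

```python
# Rack arithmetic from the figures above: a 42U rack, where 1U = 1.75 inches.
RACK_UNITS = 42
UNIT_HEIGHT_IN = 1.75
INCH_TO_M = 0.0254

rack_height_m = RACK_UNITS * UNIT_HEIGHT_IN * INCH_TO_M

def servers_per_rack(server_height_u: int, units: int = RACK_UNITS) -> int:
    """How many servers of a given U-height fit in one fully populated rack."""
    return units // server_height_u

print(f"rack height: {rack_height_m:.2f} m")          # ~1.87 m
print(f"1U servers per rack: {servers_per_rack(1)}")  # 42
print(f"2U servers per rack: {servers_per_rack(2)}")  # 21
```

As the estimation discussion in the next chapter shows, real racks are rarely this full.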
A number of different ownership structures are common for data center operation; for example corporate, managed or co-located facilities are possible. In all cases, because of the proprietary nature of the stored data, computing facilities are usually well secured also against physical threats such as burglary or fire. They can be located in old office, warehouse or industrial buildings, or a new building can be purpose-built for data center use. Only a minimal number of employees work in a typical data center. [3]

4 Metrics and Estimation

The measures for data center energy efficiency analysis are not very well established. However, the most used unit is W/m², which is used with several measures. Computer power density refers to the power consumed by computers divided by the computer room area. In contrast, total computer room density refers to the power consumed by computers and all supporting infrastructure, including power distribution units (PDU), uninterruptible power supplies (UPS), heating, ventilating and air conditioning (HVAC) and lights, divided by the computer room area. [4] Another useful metric regarding the energy efficiency of a data center is the proportion of computer power consumption of total power consumption. Greenberg et al. found in their study of 22 data centers that the proportion varied widely, the lowest observed value being 0.33. They also state that a value around 0.85 is a realistic goal. Obviously, a higher value is better, since it indicates that most power is used for computing instead of inefficient cooling. [8]

Operation of data centers is an energy intensive business. However, capacity requests from utilities regarding the power needs of new data centers are often overestimated. According to a research report from the Renewable Energy Policy Project [3], overestimation arises from five factors.
First, the nominal power need of appliances is often used as a basis for calculations, even though research has shown that computer hardware draws only 20%-35% of its nominal power need. By nominal power need, the value printed on the nameplate of an appliance is meant. The second misleading assumption is that all servers are fully configured and fully utilised all the time; in practice, it is often not the case that servers have every memory and hard disk slot filled. Third, racks are usually not fully filled, so estimating power need by viewing each rack as full gives misleading results. The same misleading assumption of full utilisation applies to physical space as well: rarely is the whole raised floor area fully populated with racks. The last misleading assumptions concern the balance of the system: as a result of the above assumptions, supporting equipment such as PDUs and computer room air conditioning units (CRAC) will be oversized as well. In addition, engineers often apply a 10%-20% safety margin to the required power. As a result of such overestimation, optimal design solutions are difficult to achieve and energy suppliers struggle to satisfy overestimated requirements.

I feel that the most important metric in energy-efficient data center design is the ratio of computing power to electrical power. After all, it is the computing power that data centers are built for and the electrical power that is minimized. This ratio is not an easy measure to define precisely. Rivoire et al. introduce the JouleSort benchmark, which captures exactly this ratio and can be used to compare very different kinds of computing systems, from mainframes to mobile devices [16]. In this context, the ongoing switch to multicore processors is also important: once chip multithreading (CMT) is utilised by software, the ratio of computing power to electrical power will improve substantially.
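Returning to the estimation factors above, their compounding effect can be sketched numerically. All the input values below are hypothetical illustrations; only the 20%-35% draw fraction and the 10%-20% safety margin come from the report:

```python
# Sketch of how the overestimation factors above compound. The concrete
# numbers are hypothetical examples, not measurements from the paper.
NAMEPLATE_W = 500        # nameplate rating of one server (assumed)
DRAW_FRACTION = 0.30     # real draw: 20%-35% of nameplate per the report
RACK_SLOTS = 42          # 1U servers per rack if completely full
RACK_FILL = 0.6          # racks are usually not fully populated (assumed)
SAFETY_MARGIN = 1.15     # engineers often add a 10%-20% margin

# Naive estimate: every slot filled, every server at nameplate, plus margin.
naive_w = RACK_SLOTS * NAMEPLATE_W * SAFETY_MARGIN

# More realistic estimate: partial fill and the measured draw fraction.
realistic_w = RACK_SLOTS * RACK_FILL * NAMEPLATE_W * DRAW_FRACTION

print(f"naive per-rack estimate:     {naive_w / 1000:.2f} kW")
print(f"realistic per-rack estimate: {realistic_w / 1000:.2f} kW")
print(f"overestimation factor:       {naive_w / realistic_w:.1f}x")
```

Even with these mild assumptions the naive estimate is several times too high, which is exactly why cooling and power infrastructure sized from it ends up running at inefficient part load.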
As an example of the benefits of CMT, the dual-core AMD Opteron 275 has been found to be 1.8 times faster than the single-core AMD Opteron 248, while using only 7% more power. [2]

5 Recommendations

5.1 Cooling

Computer and network hardware generate excess heat that must be removed; otherwise, the temperature in a data center would rise rapidly, resulting in unreliable operation of the hardware. Several sources show that the cooling infrastructure accounts for 20%-60% of total power consumption. The percentage is so significant that optimizing the cooling solution can be considered the single most important factor in the energy efficiency of a computing facility. [8] [3] [12]

Air conditioning based solutions still dominate the market, although liquid cooling has some superior characteristics over air cooling [8]. An air cooling solution is comprised of a central plant, air handling units (AHUs) and computer room air conditioning units (CRACs). A CRAC transfers heat from the computer room air to a chilled water loop. The central plant is required to transfer the heat out of the facility. AHUs can be used to affect air flows. [4]

Running air conditioning systems at part load is very inefficient. Overestimation, as described before, often results in the installation of excess cooling capacity. One method that can help size the cooling infrastructure correctly is computational fluid dynamics (CFD) modeling, as Patel et al. suggest. [13] They show that numerical modeling can be used to design airflow and model the temperature distribution in a computer room. However, CFD modeling cannot address the challenge of changing configurations in server racks. Heat loads can vary dramatically due to physical rearrangements, and varying computational load also affects the temperature distribution. Therefore, cooling systems that support varying capacity should be preferred in dynamic environments.
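To give a feel for the airflow quantities such modeling works with, the underlying steady-state heat balance can be sketched as follows. This is a textbook energy balance, not a method from [13], and the rack power and temperature rise used are hypothetical:

```python
# Steady-state heat balance: P = rho * cp * V * dT, solved for the
# volumetric airflow V needed to remove a given heat load.
RHO_AIR = 1.2      # air density, kg/m^3 (approximate, room conditions)
CP_AIR = 1005.0    # specific heat capacity of air, J/(kg*K)

def required_airflow_m3s(heat_load_w: float, delta_t_k: float) -> float:
    """Airflow (m^3/s) needed to absorb heat_load_w with a delta_t_k rise."""
    return heat_load_w / (RHO_AIR * CP_AIR * delta_t_k)

# Hypothetical example: a 10 kW rack with a 12 K inlet-to-outlet rise.
flow = required_airflow_m3s(10_000, 12.0)
print(f"{flow:.2f} m^3/s (~{flow * 2118.88:.0f} CFM)")
```

Doubling the heat load at a fixed temperature rise doubles the required airflow, which illustrates why growing power densities strain air-based cooling.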
Figure 1: Typical cold aisle - hot aisle layout [8]

One central principle of air conditioning design is that the mixing of incoming cool air and hot air rejected from the equipment should be minimized. A typical solution is to install racks in such a way that cold air inlet sides face each other. The result is a layout where each aisle between rack rows is either a cold aisle or a hot aisle, as shown in Figure 1. Schmidt et al. [17] present a ratio β which can be computed in order to locate local inefficiencies in the air flow:

β = ΔT_inlet / ΔT_rack,  (1)

where ΔT_inlet is the difference between the rack inlet air temperature and the chilled air entry temperature, and ΔT_rack is the difference between rack inlet and outlet air temperatures. The denominator is an average rack value, whereas the numerator is a local value. A β-value of 0 indicates that cold and hot air flows do not mix at that location. A value greater than one means that there is a self-heating loop, so that rack outlet air flows back to the rack inlet.

In geographical locations where the outside temperature is less than 13 °C for at least four months a year, cooling efficiency can be considerably improved by taking advantage of the outside environment. It is no coincidence that Google decided to locate its newest data center in Finland. Efficiency improvements can be achieved by designing the central plant of a cooling system in such a way that water is circulated in mild outdoor conditions. Another way to capitalize on a mild environment is to use air-side economisers, which use cool outside air for cooling the indoor air. [8] It should not be forgotten that often the easiest way to save energy is to decrease cooling and let the temperature rise a couple of degrees.

5.2 Power Distribution

Figure 2: Power distribution hierarchy [7]

At this point I will introduce how power is actually provisioned to the computers. The power provisioning system is an important topic to understand because all losses in it should be minimized in an energy-efficient computing facility. Figure 2 is a simplified diagram of the power provisioning system in a data center with a total capacity of 1 MW. The top part represents how the system is connected to a high voltage feed from an energy provider. A transformer then converts the main feed down to 480 V. This main feed and a generator are connected to an automatic transfer switch (ATS). The ATS switches the input to the generator in case the main power feed fails.

Continuing closer to the servers, power is supplied via two independent paths which are backed up by uninterruptible power supplies (UPS). Power for racks is distributed via power distribution units (PDU). All PDUs are paired with static transfer switches (STS) which make sure that a functional input feed is chosen. A PDU then transforms the voltage down to 220 V in Europe. Finally, power is provisioned to the equipment power supplies in the racks. [7]

Generally, losses occur each time the voltage is transformed or an AC/DC conversion is done. Data center operators should look for the greatest possible efficiencies in these appliances. Practically, for every watt that is saved in excess heat production, another watt is saved in cooling. According to Calwell et al. [5], the efficiency of a typical server power supply at part load is 66% on average. By investing in advanced power supplies, the efficiency can be improved to more than 80%.

There are often engine-driven generators which back up the UPS system in case the main input fails. Greenberg et al. [8] note that generators have constant standby losses caused by engine heaters. The function of an engine heater is to ensure rapid starting of a generator. As a result of continuous standby, the heater typically uses more energy than the generator will produce during the lifetime of a data center. This energy loss can be minimized by simply lowering the temperature of the heater to 20 °C. The slightly extended starting times of the generators should then be taken into account at the ATS, so that UPS batteries are used until the generators have properly warmed up.

5.3 Blade Servers

Blade or dense servers have become increasingly popular in data centers in recent years.
Compared to traditional rack servers, blade systems offer higher density installations, better modularity, easier maintenance and cost savings. A blade system is a special type of server which is typically comprised of a 6U enclosure and 8, 10, 12 or 16 blade server modules. The enclosure provides a chassis, shared power supply units, shared cooling and a signal midplane for the blade modules. A power supply unit (PSU) is either directly connected to a facility power feed or indirectly to a PDU in the rack. The modules can be server computers, storage servers or interconnect modules. An interconnect module can be an Ethernet or InfiniBand switch, and the other modules connect to it via the signal midplane. [11]

I did not find clear proof that blade systems themselves are more energy-efficient than other types of servers. However, there are some factors that make power savings easier in blade servers than in traditional rack servers. Air flow can be optimized at the system level already by the designers of the product. Electric power can be saved if efficient fans are installed whose rotational speeds can be adjusted according to the cooling requirement of the blade system. Moreover, power distribution in a blade system can be done in an energy-efficient way. PSUs convert AC power from the facility power feeds to the low-voltage DC power used by the internal components of a server. Each PSU has a load at which its operation is most energy-efficient. In a blade system, because the PSUs are shared between several servers, PSUs can be turned on and off in such a way that the most energy-efficient load is maintained on the active units.

5.4 Virtualisation

There is an ongoing trend towards heavy utilisation of virtualisation in data centers [15]. Virtualisation provides easier management of computing resources in terms of consolidation. Administrators have traditionally run only one critical service per server for safety reasons and clear administration. One service per server is often too coarse-grained a distribution of workloads, leading to underutilisation of computing resources. Using virtualisation, administrators can run several operating system instances on the same hardware. This makes it possible to run many critical services on the same server, entirely isolated from each other. As a result, the utilisation rate of computing resources increases. Virtualisation relates to energy efficiency because a recent study shows that on average 30% of servers are idle [10]. Virtualised data centers generally utilise servers more efficiently, leading to the conclusion that the same services can be maintained with a decreased energy budget. [15]

5.5 Power-Aware Request Distribution

Current server computers consume more than half of their peak power even when they are idle. One way to tackle this inefficiency is to shut down idle servers. Rajamani et al.
propose a novel approach for shutting down servers to optimise energy efficiency in their article On Evaluating Request-Distribution Schemes for Saving Energy in Server Clusters. They discuss whether requests in a server environment could be distributed in such a way that the desired service level is guaranteed while at the same time maximizing the number of servers turned off. This approach is called power-aware request distribution (PARD). It can be "characterized as minimizing cluster resource utilization for a particular workload, while meeting given quality-of-service (QoS) constraints".

Parameters affecting a PARD scheme can be divided into system and workload factors. The most influential system factors are the cluster unit and its capacity, the startup delay, the shutdown delay, and the possibility of migrating connections between servers. The cluster unit is the smallest unit of computing resource that can be turned on and off independently, typically a single server. Its capacity is the maximum load it can handle with acceptable QoS, typically expressed as a number of users. In addition, the following workload factors are important to the scheme: the load profile, the rate of change in load relative to the current load, and the workload unit. The workload unit is the minimal service request which can be scheduled to a server, for example a single client connection.

In the example system presented by Rajamani et al., PARD is essentially implemented in the load balancer of a web server farm. The load balancer computes the required number of servers at a given time and turns the others off. Estimates of this number can take advantage of historical request data, but the actual algorithms are outside the scope of this paper.
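A minimal sketch of such a load-balancer decision, assuming a simple constant-threshold policy (the server capacity, threshold and load trace below are hypothetical, not values from the article):

```python
import math

# Minimal power-aware request distribution (PARD) sketch: keep just enough
# servers powered on to serve the current load plus constant headroom.
CAPACITY = 100      # max concurrent requests one server handles with OK QoS
THRESHOLD = 50      # constant headroom for load growth during startup delay
TOTAL_SERVERS = 10

def servers_needed(current_load: int) -> int:
    """Number of servers to keep powered on for the given load."""
    needed = math.ceil((current_load + THRESHOLD) / CAPACITY)
    return max(1, min(needed, TOTAL_SERVERS))  # always keep one server on

# A toy load trace over time; the remaining servers can be powered off.
for load in [30, 180, 420, 900, 120]:
    print(f"load={load:4d} -> servers on: {servers_needed(load)}")
```

The threshold buys time for powered-down servers to start when the load rises, which is exactly the role of the startup delay in the constraint discussed next.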
However, a common constraint for PARD implementations is

N_t ≥ L_{t,t+d} / C,  (2)

where N_t is the number of running servers, L_{t,t+d} is the maximum number of simultaneous requests between the current time and the startup delay, and C is the capacity of a single server. In the simplest possible PARD algorithm, a constant threshold is added to the current load in order to be prepared for upcoming changes in request activity. The power savings from PARD depend heavily on the chosen algorithm and on the traffic pattern. Obviously, the request load must vary over time for turning servers off to pay off. [14]

6 Further Work

Liquid cooling has not been covered in this article. As power densities increase, it may soon be the only appropriate cooling solution because of its physically superior characteristics compared to air. Some efficient liquid cooling solutions are already on the market, but their features should be investigated in academic research. It has also been proposed that most of the power provisioning system in a data center could use direct current directly; the energy efficiency and suitability of such an approach should be analysed. Related to power provisioning, Greenberg et al. state that computing facilities would be suitable for on-site power generation. [8]

7 Conclusions

In this paper, I have studied the energy efficiency of data centers. This is an important research topic because the total energy consumption of servers around the world is more than 100 TWh a year, which is more than, for example, the total power consumption of Finland [9]. Clearly, this amount of energy costs a considerable amount of money, and its production causes carbon emissions. Even small percentage improvements in energy efficiency pay back quickly. Data center design is a complex matter because of the dynamic nature of the hosted resources. However, taking energy efficiency into account early in the design is essential for obtaining satisfying results.
The fourth chapter covered how common and easy it is to overestimate energy and cooling needs. Overestimation results in an inefficient and poorly balanced system. In addition, it has been recommended that CFD modeling be used in the design phase to optimize cooling. Studies show that in some data centers more energy is used for cooling than for running the computer servers. With simple analysis, careful planning and known best practices, the efficiency of cooling can be optimized. The fifth chapter covered the classical hot aisle - cold aisle layout, which should be used as a basis for air conditioning. Also, free cooling from a mild environment should be utilised, and there is a current trend of building data centers in cold climatic zones such as Finland.

The power provisioning system is critical in obtaining energy efficiency. The number of voltage conversions and AC/DC transforms should be minimized because each one of them causes a loss in the system. The remaining losses can be reduced by investing in high quality products. Also related to the power provisioning system, it has been recommended that the unnecessarily high standby temperature of generator heaters be scaled down in order to save energy.

Idle servers consume terawatt hours of energy unnecessarily every year. Two approaches to increasing the server utilisation rate were analysed, namely virtualisation and power-aware request distribution. Virtualisation is a technique for running several operating system instances on a single server. This makes it easier for administrators to fully utilise computing resources by safely running several applications on the same server. Power-aware request distribution is a proposal to predict the load on a cluster and power down excess servers.

References

[1] K. Adams and O. Agesen. A comparison of software and hardware techniques for x86 virtualization. In ASPLOS-XII: Proceedings of the 12th international conference on Architectural support for programming languages and operating systems, pages 2-13, New York, NY, USA. ACM.

[2] L. A. Barroso. The price of performance. Queue, 3(7):48-53.

[3] F. Beck. Energy smart data centers: Applying energy efficient design and technology to the digital information sector. Technical report, Renewable Energy Policy Project, Washington, DC, USA.

[4] M. Blazek, H. Chong, W. Loh, and J. G. Koomey.
Data centers revisited: Assessment of the energy impact of retrofits and technology trends in a high-density computing facility. Journal of Infrastructure Systems, 10(3):98-104.

[5] C. Calwell and A. Mansoor. AC-DC server power supplies: making the leap to higher efficiency. In Applied Power Electronics Conference and Exposition, volume 1.

[6] Eurostat. Electricity prices by type of user. portal/page?_pageid=1996,&_dad=portal&_schema=portal&screen=detailref&language=en&product=ref_TB_energy&root=REF_TB_energy/t_nrg/t_nrg_price/tsier040, retrieved Feb 20.

[7] X. Fan, W.-D. Weber, and L. A. Barroso. Power provisioning for a warehouse-sized computer. In ISCA '07: Proceedings of the 34th annual international symposium on Computer architecture, pages 13-23, New York, NY, USA. ACM.

[8] S. Greenberg, E. Mills, W. Tschudi, P. Rumsey, and B. Myatt. Best practices for data centers: Results from benchmarking 22 data centers. In Proc. of the ACEEE Summer Study on Energy Efficiency in Buildings.

[9] J. G. Koomey. Estimating total power consumption by servers in the U.S. and the world. Technical report, Lawrence Berkeley National Laboratory.

[10] D. Kusic, J. O. Kephart, J. E. Hanson, N. Kandasamy, and G. Jiang. Power and performance management of virtualized computing environments via lookahead control. In ICAC '08: Proceedings of the 2008 International Conference on Autonomic Computing, pages 3-12, Washington, DC, USA. IEEE Computer Society.

[11] K. Leigh, P. Ranganathan, and J. Subhlok. General-purpose blade infrastructure for configurable system architectures. Distrib. Parallel Databases, 22(2-3).

[12] J. Mitchell-Jackson, J. G. Koomey, et al. Energy needs in an internet economy: a closer look at data centers. Technical report, University of California at Berkeley.

[13] C. D. Patel, R. Sharma, C. E. Bash, and A. Beitelmal. Thermal considerations in cooling large scale high compute density data centers. In 8th Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems.

[14] K. Rajamani and C. Lefurgy. On evaluating request-distribution schemes for saving energy in server clusters. In IEEE International Symposium on Performance Analysis of Systems and Software.

[15] P. Ranganathan and N. Jouppi. Enterprise IT trends and implications for architecture research. In HPCA '05: Proceedings of the 11th International Symposium on High-Performance Computer Architecture, Washington, DC, USA. IEEE Computer Society.

[16] S. Rivoire, M. A. Shah, P. Ranganathan, and C. Kozyrakis. JouleSort: a balanced energy-efficiency benchmark. In SIGMOD '07: Proceedings of the 2007 ACM SIGMOD international conference on Management of data, New York, NY, USA. ACM.

[17] R. R. Schmidt, E. E. Cruz, and M. K. Iyengar. Challenges of data center thermal management. IBM J. Res. Dev., 49(4/5), 2005.
Data Center Technology: Physical Infrastructure IT Trends Affecting New Technologies and Energy Efficiency Imperatives in the Data Center Hisham Elzahhar Regional Enterprise & System Manager, Schneider
IT@Intel. Thermal Storage System Provides Emergency Data Center Cooling
White Paper Intel Information Technology Computer Manufacturing Thermal Management Thermal Storage System Provides Emergency Data Center Cooling Intel IT implemented a low-cost thermal storage system that
Datacenter Efficiency
EXECUTIVE STRATEGY BRIEF Operating highly-efficient datacenters is imperative as more consumers and companies move to a cloud computing environment. With high energy costs and pressure to reduce carbon
Best Practices. for the EU Code of Conduct on Data Centres. Version 1.0.0 First Release Release Public
Best Practices for the EU Code of Conduct on Data Centres Version 1.0.0 First Release Release Public 1 Introduction This document is a companion to the EU Code of Conduct on Data Centres v0.9. This document
abstract about the GREEn GRiD
Guidelines for Energy-Efficient Datacenters february 16, 2007 white paper 1 Abstract In this paper, The Green Grid provides a framework for improving the energy efficiency of both new and existing datacenters.
Enabling an agile Data Centre in a (Fr)agile market
Enabling an agile Data Centre in a (Fr)agile market Phil Dodsworth Director, Data Centre Solutions 2008 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without
Statement Of Work. Data Center Power and Cooling Assessment. Service. Professional Services. Table of Contents. 1.0 Executive Summary
Statement Of Work Professional Services Data Center Power and Cooling Assessment Data Center Power and Cooling Assessment Service 1.0 Executive Summary Table of Contents 1.0 Executive Summary 2.0 Features
Server Room Thermal Assessment
PREPARED FOR CUSTOMER Server Room Thermal Assessment Analysis of Server Room COMMERCIAL IN CONFIDENCE MAY 2011 Contents 1 Document Information... 3 2 Executive Summary... 4 2.1 Recommendation Summary...
Data Center 2020: Delivering high density in the Data Center; efficiently and reliably
Data Center 2020: Delivering high density in the Data Center; efficiently and reliably March 2011 Powered by Data Center 2020: Delivering high density in the Data Center; efficiently and reliably Review:
Data Center Facility Basics
Data Center Facility Basics Ofer Lior, Spring 2015 Challenges in Modern Data Centers Management, Spring 2015 1 Information provided in these slides is for educational purposes only Challenges in Modern
Energy Efficient Data Centre at Imperial College. M. Okan Kibaroglu IT Production Services Manager Imperial College London.
Energy Efficient Data Centre at Imperial College M. Okan Kibaroglu IT Production Services Manager Imperial College London 3 March 2009 Contents Recognising the Wider Issue Role of IT / Actions at Imperial
Case Study: Opportunities to Improve Energy Efficiency in Three Federal Data Centers
Case Study: Opportunities to Improve Energy Efficiency in Three Federal Data Centers Prepared for the U.S. Department of Energy s Federal Energy Management Program Prepared By Lawrence Berkeley National
An Introduction to Cold Aisle Containment Systems in the Data Centre
An Introduction to Cold Aisle Containment Systems in the Data Centre White Paper October 2010 By Zac Potts MEng Mechanical Engineer Sudlows October 2010 An Introduction to Cold Aisle Containment Systems
Data center upgrade proposal. (phase one)
Data center upgrade proposal (phase one) Executive Summary Great Lakes began a recent dialogue with a customer regarding current operations and the potential for performance improvement within the The
Server Platform Optimized for Data Centers
Platform Optimized for Data Centers Franz-Josef Bathe Toshio Sugimoto Hideaki Maeda Teruhisa Taji Fujitsu began developing its industry-standard server series in the early 1990s under the name FM server
Data Centre Testing and Commissioning
Data Centre Testing and Commissioning What is Testing and Commissioning? Commissioning provides a systematic and rigorous set of tests tailored to suit the specific design. It is a process designed to
Measuring Power in your Data Center: The Roadmap to your PUE and Carbon Footprint
Measuring Power in your Data Center: The Roadmap to your PUE and Carbon Footprint Energy usage in the data center Electricity Transformer/ UPS Air 10% Movement 12% Cooling 25% Lighting, etc. 3% IT Equipment
Data Center Energy Consumption
Data Center Energy Consumption The digital revolution is here, and data is taking over. Human existence is being condensed, chronicled, and calculated, one bit at a time, in our servers and tapes. From
DataCenter 2020: first results for energy-optimization at existing data centers
DataCenter : first results for energy-optimization at existing data centers July Powered by WHITE PAPER: DataCenter DataCenter : first results for energy-optimization at existing data centers Introduction
How To Improve Energy Efficiency In A Data Center
Google s Green Data Centers: Network POP Case Study Table of Contents Introduction... 2 Best practices: Measuring. performance, optimizing air flow,. and turning up the thermostat... 2...Best Practice
Using Simulation to Improve Data Center Efficiency
A WHITE PAPER FROM FUTURE FACILITIES INCORPORATED Using Simulation to Improve Data Center Efficiency Cooling Path Management for maximizing cooling system efficiency without sacrificing equipment resilience
Rittal Liquid Cooling Series
Rittal Liquid Cooling Series by Herb Villa White Paper 04 Copyright 2006 All rights reserved. Rittal GmbH & Co. KG Auf dem Stützelberg D-35745 Herborn Phone +49(0)2772 / 505-0 Fax +49(0)2772/505-2319 www.rittal.de
GREEN FIELD DATA CENTER DESIGN WATER COOLING FOR MAXIMUM EFFICIENCY. Shlomo Novotny, Vice President and Chief Technology Officer, Vette Corp.
GREEN FIELD DATA CENTER DESIGN WATER COOLING FOR MAXIMUM EFFICIENCY Shlomo Novotny, Vice President and Chief Technology Officer, Vette Corp. Overview Data centers are an ever growing part of our economy.
Design Best Practices for Data Centers
Tuesday, 22 September 2009 Design Best Practices for Data Centers Written by Mark Welte Tuesday, 22 September 2009 The data center industry is going through revolutionary changes, due to changing market
Effect of Rack Server Population on Temperatures in Data Centers
Effect of Rack Server Population on Temperatures in Data Centers Rajat Ghosh, Vikneshan Sundaralingam, Yogendra Joshi G.W. Woodruff School of Mechanical Engineering Georgia Institute of Technology, Atlanta,
HPC TCO: Cooling and Computer Room Efficiency
HPC TCO: Cooling and Computer Room Efficiency 1 Route Plan Motivation (Why do we care?) HPC Building Blocks: Compuer Hardware (What s inside my dataroom? What needs to be cooled?) HPC Building Blocks:
Education Evolution: Scalable Server Rooms George Lantouris Client Relationship Manager (Education) May 2009
Education Evolution: Scalable Server Rooms George Lantouris Client Relationship Manager (Education) May 2009 Agenda Overview - Network Critical Physical Infrastructure Cooling issues in the Server Room
Re Engineering to a "Green" Data Center, with Measurable ROI
Re Engineering to a "Green" Data Center, with Measurable ROI Alan Mamane CEO and Founder Agenda Data Center Energy Trends Benchmarking Efficiency Systematic Approach to Improve Energy Efficiency Best Practices
2014 Best Practices. The EU Code of Conduct on Data Centres
2014 Best Practices The EU Code of Conduct on Data Centres 1 Document Information 1.1 Version History Version 1 Description Version Updates Date 5.0.1 2014 Review draft Comments from 2013 stakeholders
Cooling Capacity Factor (CCF) Reveals Stranded Capacity and Data Center Cost Savings
WHITE PAPER Cooling Capacity Factor (CCF) Reveals Stranded Capacity and Data Center Cost Savings By Lars Strong, P.E., Upsite Technologies, Inc. Kenneth G. Brill, Upsite Technologies, Inc. 505.798.0200
Free Cooling in Data Centers. John Speck, RCDD, DCDC JFC Solutions
Free Cooling in Data Centers John Speck, RCDD, DCDC JFC Solutions Why this topic Many data center projects or retrofits do not have a comprehensive analyses of systems power consumption completed in the
Introducing Computational Fluid Dynamics Virtual Facility 6SigmaDC
IT Infrastructure Services Ltd Holborn Gate, 330 High Holborn, London, WC1V 7QT Telephone: +44 (0)20 7849 6848 Fax: +44 (0)20 7203 6701 Email: [email protected] www.itisltd.com Introducing Computational
CUTTING-EDGE SOLUTIONS FOR TODAY AND TOMORROW. Dell PowerEdge M-Series Blade Servers
CUTTING-EDGE SOLUTIONS FOR TODAY AND TOMORROW Dell PowerEdge M-Series Blade Servers Simplifying IT The Dell PowerEdge M-Series blade servers address the challenges of an evolving IT environment by delivering
Calculating Total Power Requirements for Data Centers
Calculating Total Power Requirements for Data Centers By Richard Sawyer White Paper #3 Executive Summary Part of data center planning and design is to align the power and cooling requirements of the IT
Best Practices for Wire-free Environmental Monitoring in the Data Center
White Paper 11800 Ridge Parkway Broomfiled, CO 80021 1-800-638-2638 http://www.42u.com [email protected] Best Practices for Wire-free Environmental Monitoring in the Data Center Introduction Monitoring for
Reducing Data Center Loads for a Large-Scale, Net Zero Office Building
rsed Energy Efficiency & Renewable Energy FEDERAL ENERGY MANAGEMENT PROGRAM Reducing Data Center Loads for a Large-Scale, Net Zero Office Building Energy Efficiency & Renewable Energy Executive summary
Measure Server delta- T using AUDIT- BUDDY
Measure Server delta- T using AUDIT- BUDDY The ideal tool to facilitate data driven airflow management Executive Summary : In many of today s data centers, a significant amount of cold air is wasted because
Cost Model for Planning, Development and Operation of a Data Center
Cost Model for Planning, Development and Operation of a Data Center Chandrakant D. Patel, Amip J. Shah 1 Internet Systems and Storage Laboratory HP Laboratories Palo Alto HPL-005-107(R.1) June 9, 005*
Data Sheet FUJITSU Server PRIMERGY CX400 M1 Multi-Node Server Enclosure
Data Sheet FUJITSU Server PRIMERGY CX400 M1 Multi-Node Server Enclosure Data Sheet FUJITSU Server PRIMERGY CX400 M1 Multi-Node Server Enclosure Scale-Out Smart for HPC, Cloud and Hyper-Converged Computing
Element D Services Heating, Ventilating, and Air Conditioning
PART 1 - GENERAL 1.01 OVERVIEW A. This section supplements Design Guideline Element D3041 on air handling distribution with specific criteria for projects involving design of a Data Center spaces B. Refer
11 Top Tips for Energy-Efficient Data Center Design and Operation
11 Top Tips for Energy-Efficient Data Center Design and Operation A High-Level How To Guide M e c h a n i c a l a n a l y s i s W h i t e P a p e r w w w. m e n t o r. c o m When is Data Center Thermal
Green Data Centre: Is There Such A Thing? Dr. T. C. Tan Distinguished Member, CommScope Labs
Green Data Centre: Is There Such A Thing? Dr. T. C. Tan Distinguished Member, CommScope Labs Topics The Why? The How? Conclusion The Why? Drivers for Green IT According to Forrester Research, IT accounts
Data Centre Energy Efficiency Operating for Optimisation Robert M Pe / Sept. 20, 2012 National Energy Efficiency Conference Singapore
Data Centre Energy Efficiency Operating for Optimisation Robert M Pe / Sept. 20, 2012 National Energy Efficiency Conference Singapore Introduction Agenda Introduction Overview of Data Centres DC Operational
Dealing with Thermal Issues in Data Center Universal Aisle Containment
Dealing with Thermal Issues in Data Center Universal Aisle Containment Daniele Tordin BICSI RCDD Technical System Engineer - Panduit Europe [email protected] AGENDA Business Drivers Challenges
How Does Your Data Center Measure Up? Energy Efficiency Metrics and Benchmarks for Data Center Infrastructure Systems
How Does Your Data Center Measure Up? Energy Efficiency Metrics and Benchmarks for Data Center Infrastructure Systems Paul Mathew, Ph.D., Staff Scientist Steve Greenberg, P.E., Energy Management Engineer
Power Efficiency Metrics for the Top500. Shoaib Kamil and John Shalf CRD/NERSC Lawrence Berkeley National Lab
Power Efficiency Metrics for the Top500 Shoaib Kamil and John Shalf CRD/NERSC Lawrence Berkeley National Lab Power for Single Processors HPC Concurrency on the Rise Total # of Processors in Top15 350000
Analysis of data centre cooling energy efficiency
Analysis of data centre cooling energy efficiency An analysis of the distribution of energy overheads in the data centre and the relationship between economiser hours and chiller efficiency Liam Newcombe
How To Use Rittal'S Rizone
RiZone Data Centre Infrastructure Management Enclosures Power Distribution Climate Control IT INFRASTRUcTURe SOFTWARE & SERVICEs RiZone Data Centre Infrastructure Management What can the RiZone Data Centre
Optimizing Power Distribution for High-Density Computing
Optimizing Power Distribution for High-Density Computing Choosing the right power distribution units for today and preparing for the future By Michael Camesano Product Manager Eaton Corporation Executive
Case Study: Innovative Energy Efficiency Approaches in NOAA s Environmental Security Computing Center in Fairmont, West Virginia
Case Study: Innovative Energy Efficiency Approaches in NOAA s Environmental Security Computing Center in Fairmont, West Virginia Prepared for the U.S. Department of Energy s Federal Energy Management Program
DataCenter 2020: hot aisle and cold aisle containment efficiencies reveal no significant differences
DataCenter 2020: hot aisle and cold aisle containment efficiencies reveal no significant differences November 2011 Powered by DataCenter 2020: hot aisle and cold aisle containment efficiencies reveal no
White Paper Rack climate control in data centres
White Paper Rack climate control in data centres Contents Contents...2 List of illustrations... 3 Executive summary...4 Introduction...5 Objectives and requirements...6 Room climate control with the CRAC
Energy Efficiency and Availability Management in Consolidated Data Centers
Energy Efficiency and Availability Management in Consolidated Data Centers Abstract The Federal Data Center Consolidation Initiative (FDCCI) was driven by the recognition that growth in the number of Federal
The Different Types of Air Conditioning Equipment for IT Environments
The Different Types of Air Conditioning Equipment for IT Environments By Tony Evans White Paper #59 Executive Summary Cooling equipment for an IT environment can be implemented in 10 basic configurations.
2014 Best Practices. for the EU Code of Conduct on Data Centres
EUROPEAN COMMISSION DIRECTORATE-GENERAL JRC JOINT RESEARCH CENTRE Institute for Energy and Transport Renewable Energies Unit 2014 Best Practices for the EU Code of Conduct on Data Centres 1 Document Information
- White Paper - Data Centre Cooling. Best Practice
- White Paper - Data Centre Cooling Best Practice Release 2, April 2008 Contents INTRODUCTION... 3 1. AIR FLOW LEAKAGE... 3 2. PERFORATED TILES: NUMBER AND OPENING FACTOR... 4 3. PERFORATED TILES: WITH
Office of the Government Chief Information Officer. Green Data Centre Practices
Office of the Government Chief Information Officer Green Data Centre Practices Version : 2.0 April 2013 The Government of the Hong Kong Special Administrative Region The contents of this document remain
Motherboard- based Servers versus ATCA- based Servers
Motherboard- based Servers versus ATCA- based Servers Summary: A comparison of costs, features and applicability for telecom application hosting After many years of struggling for market acceptance, it
Data Center & IT Infrastructure Optimization. Trends & Best Practices. Mickey Iqbal - IBM Distinguished Engineer. IBM Global Technology Services
Data Center & IT Infrastructure Optimization Trends & Best Practices Mickey Iqbal - IBM Distinguished Engineer IBM Global Technology Services IT Organizations are Challenged by a Set of Operational Issues
Power Management in the Cisco Unified Computing System: An Integrated Approach
Power Management in the Cisco Unified Computing System: An Integrated Approach What You Will Learn During the past decade, power and cooling have gone from being afterthoughts to core concerns in data
Data Center Consolidation Trends & Solutions. Bob Miller Vice President, Global Solutions Sales Emerson Network Power / Liebert
Data Center Consolidation Trends & Solutions Bob Miller Vice President, Global Solutions Sales Emerson Network Power / Liebert Agenda Traditional Data Centers Drivers of a Changing Environment Alternate
Australian Government Data Centre Strategy 2010-2025
Australian Government Data Centre Strategy 2010-2025 Better Practice Guide: Data Centre Power June 2013 17/06/2013 4:12 PM 17/06/2013 4:12 PM Contents Contents 2 1. Introduction 3 Scope 3 Policy Framework
