ASSESSMENT OF EPA MID-TIER DATA CENTER AT POTOMAC YARD


CONTRIBUTORS:
BRUCE LONG, APC BY SCHNEIDER ELECTRIC
JAMES FREEMAN, EPA
DR. MICHAEL PATTERSON, INTEL

EXECUTIVE SUMMARY

In April 2008, The Green Grid signed a Memorandum of Understanding (MOU) with the Environmental Protection Agency (EPA) to promote energy efficiency in small to mid-sized data centers. This effort promotes the innovative work of the Information Technology (IT) industry and the EPA to improve the energy efficiency of computing facilities. According to the EPA, data centers across the United States accounted for 1.5 percent of total US electricity demand in 2006 (equivalent to the annual electricity consumption of the state of Florida) and have become one of the fastest growing users of energy. While the power consumed at any individual data center may be small, such data centers are numerous at the EPA and other large organizations. IDC estimates that in the US there are 3.6 million servers in closets and rooms, 3.2 million servers in localized and mid-tier data centers, and roughly 3.1 million servers in the enterprise-class space. This report is therefore relevant to roughly one-third of the servers in the United States.

The following is the initial report documenting findings and making recommendations around a typical EPA mid-tier data center in the Washington D.C. area: One Potomac Yard (OPY). The Green Grid assessment found that OPY is currently underutilized and has a power usage effectiveness (PUE) of approximately 2.9. Total power for the data center is estimated to be 132 kW, with approximately 45 kW used by the IT equipment and 87 kW consumed by cooling, power distribution, and lighting. OPY's density is currently less than 25 W/sq ft. There are a number of specific cooling system design issues that, if addressed, could increase cooling system efficiency at OPY. Recommendations made in the report cover a wide range of specific and programmatic opportunities for mid-tier data centers across the US. These include:

- Develop training materials for installation and procurement of IT equipment and infrastructure
- Develop a tighter process for IT procurement and installation, which ensures that power management features are turned on and that performance data (IT and power/thermal) is collected and trended
- Develop a tighter process for maintenance and documentation of existing equipment and operations
- Investigate virtualization and greater control and sharing of IT assets. This should include, as a first step, a formal virtualization/consolidation study based on server utilization
- Investigate consolidation of other computer rooms and IT resources

Ultimately, the goal of this partnership is to share best practices for replication with other governmental agencies and industry stakeholders. Future partnerships between the EPA, federal partners, industry, and associations will help design and build the way into a green federal IT infrastructure.

THE GREEN GRID

The Green Grid is a global consortium of companies dedicated to advancing energy efficiency in data centers and computing ecosystems. The Green Grid does not endorse any vendor-specific products or solutions, and will seek to provide industry-wide recommendations on best practices, metrics, and technologies that will improve overall data center energy efficiencies. Membership is open to companies interested in data center operational efficiency at the Contributing or General Member level. General members attend and participate in general meetings of The Green Grid, review proposals for specifications, and have access to specifications for test suites and design guidelines and IP licensing. Additional benefits for contributor members include participation and voting rights in committees and working groups. Additional information is available at www.thegreengrid.org.

MOU BACKGROUND

The EPA and The Green Grid signed an MOU in April 2008 to establish a working agreement for the assessment of a small to mid-sized EPA data center. The intent was to provide specific recommendations for energy efficiency improvements in the subject data center but also to share the results, methodologies, and recommendations across the EPA and related governmental agencies. The Green Grid created a task force of its member companies to review the task, perform the assessment, and work with the EPA on promulgation of the results and recommendations. The collaboration defined by the two organizations includes:

(A) JOINT PROGRAM GOALS
- Identify an existing small EPA computer/server room as a target for an energy efficiency showcase
- Define and execute a publicly transparent project demonstrating the feasibility, approach, and benefits of optimizing an existing EPA computer/server room for energy efficiency
- Share real-world lessons learned, best practices, and results with other governmental agencies and industry stakeholders
- Quantify the expected operational benefits achievable for the target small computer/server room as well as other similar small computer/server rooms
- Ensure that all joint promotional and outreach materials receive approval from EPA's Office of Public Affairs

(B) EPA RESPONSIBILITIES
- Provide a core team of EPA personnel to work with The Green Grid to jointly identify an existing small EPA computer/server room for the project
- Identify a project lead and project participants, along with the availability of subject matter experts and background data, including computer/server room constraints and service requirements, as required by the project
- Provide the project team with working-level access to the facility to perform the assessment
- Ensure that the EPA showcase project supports and illustrates the initiatives of the ENERGY STAR for Data Centers program
- Participate in regularly scheduled status meetings and review written status updates at a frequency agreed to by the joint project team
- As appropriate, schedule and hold interagency project reviews for EPA's chief information officer

(C) THE GREEN GRID RESPONSIBILITIES
- Provide a team to perform the assessment
- Define and manage the assessment project
- Document assessment processes, findings, and all recommendations, including best practices and metrics with expected impact on energy efficiency of the EPA computer/server room
- Conduct a comparison analysis between the assessed EPA computer/server room and existing and emerging computer facilities
- Provide, as a loan from participating member companies of The Green Grid, any equipment required during the assessment phase
- Conduct regularly scheduled status meetings and provide written status updates at a frequency agreed to by the joint project team

The Green Grid team first met with the EPA in June 2008 at One Potomac Yard in the Crystal City complex near Washington D.C. Team members toured and discussed the local data center; their initial impression was that the data center was reasonably new (built within the last 3 years) and that, as of yet, it was not heavily loaded. The team felt that although the subject data center presented opportunities for efficiency changes, it was perhaps not as typical (in age or utilization) as was expected when the MOU was written. The team members and EPA participants together discussed the possibility of looking at a second data center of an earlier vintage. The Green Grid team then visited the EPA data centers in the USEPA East building in downtown Washington D.C. and reviewed all the rooms comprising the two data centers. These data centers were older, more fully utilized, and at first review presented greater opportunities for energy efficiency gains. The EPA and The Green Grid participants discussed the idea of covering both buildings in their assessment work and agreed to attempt it.

Significant work went into preparing for the assessment and pulling together the needed data as well as the design documentation on the existing data centers. The Green Grid assessment team visited the two data centers the week of November 17th, with a full day's assessment taking place at each site. During this time, the team gathered data on power, cooling, room layout, and IT processes. In general, the assessment at One Potomac Yard (OPY) yielded more data and specific results that could be used to measure and assess its current efficiency. The data center at EPA East (East) was more complex than the team had expected at the outset. The organic growth of the data center over the last 10 years had created a situation in which even the building engineering staff (GSA) did not have a clear picture of all of the services being provided to the spaces and their sources. Given the limited time on-site, this made it impossible to collect enough data to even estimate what the efficiency of the East data center may be. Instead, the team focused on identifying areas where efficiency improvements could be achieved there. These improvement opportunities will generally apply to the majority of data centers of this nature. A separate report focusing on the East data center opportunities will be the next step following this initial report. This report focuses primarily on the One Potomac Yard data center with regard to efficiency measures, but many of its general recommendations will apply to the majority of government and private-sector data centers of similar size.

BACKGROUND ON EPA MID-TIER DATA CENTER AT POTOMAC YARD

The OPY data center is roughly 2,500 square feet and consists primarily of individual EPA program offices' servers and IT equipment, shared servers, and operations servers hosted in an EPA-run data center. There are currently 59 server cabinets on an 18-inch raised floor with five peripheral computer room air conditioning units (CRACs) in the space. These five CRACs are connected to the OPY campus chilled-water system. The room is powered from a central uninterruptible power supply (UPS) rated at 400 kVA/360 kW feeding two power distribution units (PDUs) in the space. The PDUs step the UPS 480 V output down to 208Y/120 V and distribute power to the racks. The OPY data center is one of approximately 100 EPA computer rooms across 45 locations around the country. EPA also owns an enterprise-class data center, the National Computer Center in Research Triangle Park, North Carolina; a series of energy conservation measures and activities are underway at that facility. All of EPA's computer centers combined support more than 25,000 employees with everything from services to scientific computing.

SIZE OF THE PROBLEM

The OPY data center can be considered a mid-tier data center in the context of a recent IDC report on data centers [1]. The IDC taxonomy includes server closets, server rooms, localized data centers, mid-tier data centers, and enterprise-class data centers. The intent of this initial EPA data center assessment report is to apply to the localized and mid-tier data centers. The closets and server rooms can make use of some best practices, but those spaces are generally too small to hold enough racks to even consider a hot-aisle/cold-aisle configuration. The IDC report suggests there are roughly 75,000 localized and mid-tier data centers in the U.S. alone. While the power consumed at OPY may not be that significant in and of itself, the magnitude of all the localized and mid-tier data centers is significant. It is interesting to note that IDC estimates that there are 3.6 million servers in closets and rooms, 3.2 million servers in the localized and mid-tier space, and roughly 3.1 million servers in the enterprise-class space. Therefore, this report directly applies to roughly one-third of the servers in the United States.

The assessment identified a number of issues, detailed later in this report, that have contributed to the current state of efficiency at OPY. Some of these issues cannot be addressed with simple hardware fixes or set-point changes; others require more invasive procedures. Still, there are a number of programmatic issues that can be addressed by the EPA and governmental agencies. It is fitting that this assessment report includes some recommendations for direct action in these particular data centers along with recommendations for programmatic opportunities and policy changes that can improve the efficiency of mid-tier data centers industry-wide. The Green Grid believes that these specific opportunities represent the assessment's most significant benefits and where The Green Grid can best be of service in working with the EPA.

Figure 1 below depicts the data center at OPY, including its layout, infrastructure equipment, perforated tiles, and IT racks.

FIGURE 1. ONE POTOMAC YARD DATA CENTER LAYOUT

ASSESSMENT METHODOLOGIES

A data center assessment can be performed for many different reasons. The primary motivation is the desire to improve energy efficiency in a current data center, but that motivation can be based on a number of factors. First could be cost: the power bill associated with a data center, particularly an inefficient one, can be an issue, and increasing a data center's efficiency can reduce its energy bill to a competitive level. The second factor may be capacity, as it relates to space, power, cooling, or airflow. Cooling and airflow are certainly linked, but they are not exactly the same: a data center may have enough chilled water and its CRACs may have adequate coil size to remove the heat, yet the fans in the space may not be sufficient to provide the requisite air to each server and may actually be the constraint (or vice versa). Boosting efficiency can be one of the most effective ways to gain capacity. For example, if available power is limited and the room can be made more efficient, the extra power recovered could be used to support additional computing resources.

PUE

The primary result of an assessment might be the power usage effectiveness (PUE) of the data center, a Green Grid metric that has rapidly been adopted by the IT industry as the primary measure of a data center's efficiency. PUE is defined as total power divided by IT power [2]. Included in total power are the IT power, losses in the power distribution, lighting power, and cooling power. (Note that the metric is most useful and appropriate when it is integrated over time, so that it becomes energy divided by energy; however, a snapshot in units of power divided by power is still useful.) A mathematically ideal PUE would be 1.0. This theoretical case is where 100% of the power from the utility is delivered to the IT equipment, with none being used by the infrastructure. Any value over 1.0 represents the infrastructure "tax" to run the data center. A value of 2.0 means that an equal amount of power is going to the cooling, lighting, and power inefficiencies as is going to the IT equipment itself.
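As a minimal illustration of the metric, the sketch below computes a snapshot PUE from component power readings. The readings and the function name are illustrative assumptions, not measurements from OPY.

```python
def snapshot_pue(it_kw, cooling_kw, distribution_loss_kw, lighting_kw):
    """Snapshot PUE: total facility power divided by IT power."""
    total_kw = it_kw + cooling_kw + distribution_loss_kw + lighting_kw
    return total_kw / it_kw

# Hypothetical metered snapshot (kW), for illustration only
print(round(snapshot_pue(it_kw=50.0, cooling_kw=40.0,
                         distribution_loss_kw=25.0, lighting_kw=5.0), 2))
# -> 2.4: for every watt delivered to IT, another 1.4 W goes to infrastructure
```

Integrating the same quantities over a billing period turns the ratio into energy divided by energy, which is the preferred form of the metric.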

Any time a data center can reduce this tax, it makes additional power available to the IT equipment or reduces cost. Knowing the PUE allows the data center operator to accomplish two things. The first and most important is using the PUE metric to track that data center's efficiency over time: any changes to the physical infrastructure should result in a reduced PUE period over period. The second is comparing the data center to industry norms and best-in-class values; PUE provides a way to benchmark a data center against others. This activity is helpful but must be accomplished carefully. There are many factors that modify PUE, such as local climate; the level of overall data center reliability; hourly, daily, or weekly data-processing load changes; assumptions made where actual data is not available; and even what is included in deriving PUE (e.g., one site may not include the lighting). A detailed understanding of all these factors is required to allow a strict comparison. Short of that, the comparison must be made on an approximate basis (e.g., a PUE value of 2.2 may not be any different than a value of 2.1).

A data center assessment also would include a review of the existing configuration against best practices for cooling, power distribution, room layout, etc. The other significant part of any thorough assessment would be a review of the documentation and processes around the data center and the IT equipment. This should include all activities from IT equipment specification to procurement, installation, operation, utilization, and asset management through the end of the equipment's life.

TOOLS

The primary tool set for an assessment includes a defined assessment process to ensure each infrastructure component is documented, a well-organized data sheet to record the data, a high-resolution digital camera, a clipboard, and a pencil. Considerable data needs to be collected and collated with existing design-basis documentation to understand the intent and quality of the design and the success of its implementation. Additional required tools are power-metering equipment as well as temperature, humidity, and airflow instrumentation. More detailed information about a data center also can be found using other tools, such as computational fluid dynamics (CFD) and infrared (IR) thermography. CFD provides a room-level model of the airflow, temperatures, and pressures to determine the overall thermal solution's effectiveness. Significant proposed changes are best evaluated using CFD tools to determine their impact on the room computationally, prior to making a hardware change. IR thermography can be used to find and identify hot spots and thermal patterns in the data center far more readily than moving a temperature sensor around the space; it provides a temperature-based color picture of the surface temperatures in the room. All of the tools listed above were employed in the assessment of OPY.

FINDINGS AND RECOMMENDATIONS

POWER

The Green Grid assessment team found several key issues with the power distribution system at OPY, which affect both its efficiency and its operations. The UPS in the OPY data center is rated at 400 kVA/360 kW output. The UPS output was lightly loaded at 67 kW, or approximately 19% of rated capacity. At this power level, the fixed losses within the UPS dominate, resulting in a UPS efficiency of approximately 75%. The UPS is running far down on its utilization curve, which produces considerable inefficiency in the power distribution chain (the loading arithmetic is sketched at the end of this section). Significant additional capacity exists for the electrical power system: industry best practices recommend up to 80% load on a UPS output for data centers with OPY's current power system redundancy configuration.

OPY's UPS and PDU documentation was outdated and missing some critical information. The UPS utility supply should be documented on the UPS. The PDU panel schedules should be updated by performing a circuit trace of each circuit, and the load on each output breaker should be measured and documented.

At OPY the UPS is located in the middle of the raised-floor area, where it uses up valuable floor space and contributes a significant heat load to the data center (20 kW). The CRACs must work to remove the excess heat, which represents an expensive UPS cooling scenario and contributes to the data center's inefficiency. While relocating the unit outside of the raised-floor area would be complex, it should be considered if a significant capacity expansion is planned for OPY.

As noted earlier, OPY's power distribution system has a gap in that the panel schedules are out of date. A one-line diagram showing the power path from the feed to the UPS to the racks was not available, and it is unclear what the specific power path is from the UPS to the server racks. This should be corrected as soon as possible, as it is only going to get more difficult to fix and will create additional problems when the data center becomes more heavily loaded. This should not be considered simply an efficiency issue; operationally, the lack of a power-path diagram will eventually strand capacity as well. Electrical contractors can be hired or tasked to complete this work. Further, for future additions and removals of electrical gear, OPY should implement a detailed process that mandates an update of the panel schedules and the one-line diagram.

Power costs at the OPY complex vary seasonally: 7 to 7.23 cents/kWh for June through September and 6.52 cents/kWh for October through May, with an annual average of 6.74 cents/kWh. These costs are below the national average.*

* Power rates provided by the building owner. The national average can be found at the eia.gov website; the all-sectors national average through March 2009 was $0.10 per kWh.

Extensive metering is being added to the data center at OPY, which will prepare the data center well for future growth. This metering will benefit the operations and building staff in tracking energy use in the different sections of the data center. Unfortunately, The Green Grid assessment team could not comment on its completeness because no one-line diagram exists on which to show whether the metering captures all critical points in the power distribution plan. The team did discover that the local utility is working on an energy efficiency incentive program. The EPA should explore this further to determine what credits could be obtained for future efficiency improvements made to the data center. In fact, this should probably be the very next step: most utility incentive programs require submission of an application and project plan prior to obtaining approvals and beginning work. The Green Grid is aware of a number of such programs and would be happy to connect the local utility with others that have successfully implemented such programs in other areas.
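As a sketch of the UPS loading arithmetic referenced above, the snippet below derives the load factor and conversion efficiency from the metered figures reported during the assessment (87 kW into the UPS, 67 kW out, against a 360 kW rating); the function name is an arbitrary choice.

```python
def ups_metrics(input_kw, output_kw, rated_kw):
    """Derive UPS load factor and conversion efficiency from metered power."""
    load_factor = output_kw / rated_kw   # fraction of rated capacity in use
    efficiency = output_kw / input_kw    # delivered power / drawn power
    return load_factor, efficiency

load, eff = ups_metrics(input_kw=87.0, output_kw=67.0, rated_kw=360.0)
print(f"UPS load: {load:.0%}, efficiency: {eff:.0%}")
# ~19% load and ~77% conversion efficiency (the text cites approximately 75%);
# at this light loading the fixed losses inside the UPS dominate.
```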

COOLING

The cooling system at OPY consists of five CRACs, each fed by the building chilled-water system. One problem with determining the efficiency of the data center's overall cooling system is that the data center relies on the overall campus system for a significant part of its cooling. While this is a more efficient practice than, for example, using direct-expansion chillers associated with each CRAC, it does make the overall efficiency harder to measure. The building engineer would need to provide an energy cost for each volume of chilled water and each degree of temperature rise of the chilled water that serves the room, and a way to measure that flow rate and temperature rise also is needed. Because it lacked that information, the assessment team made the reasonable assumption that the entire data center cooling load was placed upon the chilled-water system, and it applied a general efficiency value based on those of modern, large-scale chilled-water systems.

Like the power system, the OPY cooling system is currently underutilized. The units are nominally rated at 180,000 BTU/h, or 53 kW, of cooling. Assuming an N+1 configuration, the data center has roughly 200 kW of cooling capacity and is using approximately half that amount (see the conversion sketch below). The team checked the CRAC unit airflows and found them to be less than the rated value of 8,750 CFM per unit; the existing units ranged from 6,106 to 7,550 CFM. The team also completed a CFD analysis of the OPY data center. Depicted in multiple figures in this report, the analysis returned airflow values within a couple of percentage points of the measured values, which is considered very good agreement. One of the issues with the CRACs was that the ceiling tiles above the CRACs were not cut to fit the CRAC inlet and were actually closing off the airflow.

The CRAC units were controlled on return air temperatures and humidities. This is one of the simplest and most pervasive strategies in the industry. Unfortunately, it is also the least efficient control scheme from an energy perspective. Imagine twice as much airflow as needed flowing into the room, with very cool air bypassing the servers and mixing with the other half of the air leaving the server outlets, which is very warm. The control system has no way of knowing about this imbalance, and overcooling occurs. However, it is a fairly simple and robust system when the set points are low enough that the servers generally receive the temperature of air they need, even in a room with poor airflow management. The key word in the last sentence is "generally." Even with very low return-air set points (68 F), OPY had a number of hot spots, where the inlet temperature was at the 2004 ASHRAE thermal guideline upper limit of 77 F. (See Table 1 for actual measured temperature and airflow results.) There was one rack temperature that even exceeded the 2008 ASHRAE upper limit guideline of 80 F [3]. This indicates recirculated air from the back of an adjacent rack coming around or over the top of the racks to create a server inlet temperature that is too high.

Low CRAC return temperatures also limit the cooling capacity of the CRAC cooling coil: the closer the return temperature is to the chilled-water temperature, the smaller the resultant cooling capacity. Conversely, changing the server inlet temperatures to nearer the upper limit of the ASHRAE recommended range will increase CRAC unit capacity.
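The cooling-capacity figures above can be checked with a simple unit conversion. The sketch below assumes four of the five CRACs are active (the N+1 arrangement described in the text); the constant 3,412 BTU/h per kW is the standard thermal conversion.

```python
BTU_PER_HOUR_PER_KW = 3412.14          # 1 kW of heat removal is about 3,412 BTU/h

rated_btuh_per_crac = 180_000          # nominal rating per CRAC unit
active_units = 5 - 1                   # N+1: one of the five units held in reserve

per_unit_kw = rated_btuh_per_crac / BTU_PER_HOUR_PER_KW
total_kw = per_unit_kw * active_units
print(f"{per_unit_kw:.0f} kW per CRAC, ~{total_kw:.0f} kW with four units active")
# ~53 kW per unit and ~211 kW in total, in line with the "roughly 200 kW"
# of cooling capacity cited above.
```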

The controls of the CRAC units could possibly be moved to the supply side/cold aisle to ensure that proper temperatures are being obtained where the servers need the cool air. In addition, the set points for the CRACs should be reevaluated. The current set points of 68 F for return air (as seen in Table 2) are actually more than cool enough to be used for supply air. ASHRAE recommends a range of inlet air temperatures from 65 F to 80 F; the current set points in OPY are actually closer to the cooler end of the supply temperature range. The humidity set points could be lowered to achieve better energy efficiency. They currently are set at between 45% and 50% relative humidity (RH). This disparity among the CRACs can cause the units to fight, and that simultaneous humidification and dehumidification can have a major impact on energy consumption. In addition, the 2008 ASHRAE guidelines suggest a low-end humidity dew point of 42 F. At 68 F, this is roughly 38% RH. It would be an even lower RH at a warmer return temperature, an adjustment that would produce energy savings as well as water savings (see the calculation sketched below).

TABLE 1. RACK INLET TEMPERATURES IN ONE POTOMAC YARD

However, none of the aforementioned cooling-control realignments should take place on their own. An overall engineering analysis should be performed to look at improving airflow management and modifying the controls scheme and set points concurrently. Significant cooling capacity increases could be had in the space to greatly increase the number of servers that could be housed in OPY, but conditions such as the racks' existing hot spots should be taken into consideration.

Variable speed drives (VSDs) present another opportunity for the OPY data center. Its particular CRACs do not have VSDs, which causes all the units to run at full fan speed, regardless of need. One of the best ways to reduce cooling energy is to use VSDs to turn the airflow down to just meet the room's requirements. The existing units' data sheets did not show VSDs even as an ordering option, so it is unlikely that they could be added as a retrofit. Another option, one that would require a detailed engineering study, would be sequencing the number of CRAC units running: as the load diminishes, an extra CRAC could be placed in an idle/off position. This scheme has been used successfully in other data centers, but a study must consider the ability of all the CRACs to supply all areas of the underfloor plenum.
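The relative-humidity figure quoted above for the 42 F dew point can be reproduced with a standard psychrometric approximation (the Magnus formula). This is a sketch of the arithmetic only, not a controls specification.

```python
import math

def saturation_factor(temp_c):
    """Magnus-formula term proportional to saturation vapor pressure."""
    return math.exp(17.625 * temp_c / (243.04 + temp_c))

def rh_from_dew_point(dry_bulb_f, dew_point_f):
    """Approximate relative humidity (%) from dry-bulb and dew-point temperatures in F."""
    t_c = (dry_bulb_f - 32.0) * 5.0 / 9.0
    td_c = (dew_point_f - 32.0) * 5.0 / 9.0
    return 100.0 * saturation_factor(td_c) / saturation_factor(t_c)

# 42 F dew point (the 2008 ASHRAE low end) at the 68 F return-air set point
print(f"{rh_from_dew_point(68.0, 42.0):.1f}% RH")   # ~38.8%, i.e. roughly 38% as stated above
# The same dew point at a warmer return temperature is an even lower RH
print(f"{rh_from_dew_point(75.0, 42.0):.1f}% RH")   # ~30.6%
```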

In general, the OPY data center has a good airflow-management concept with its strategy of getting the hot air above the suspended ceiling. Unfortunately, the method in practice is not as successfully implemented as it could be. There are a number of options to better contain the hot air and get it directly above the suspended ceiling, but these are outside the scope of this initial assessment report.

TABLE 2. CRAC SET POINTS AND OPERATIONAL DATA FOR OPY

Additional comments from the assessment team are listed below. More emphasis should be placed on the day-to-day focus on airflow management. One example of this is shown in Figure 2, where the cableway impedes the airflow and hampers cooling. While in the past many data centers practiced commingling cableways and airflow in the same physical space, it is now recommended that all cables be migrated to overhead cable distribution. Further observations by the assessment team include:

- Several data center improvements have been made since the last visit in June 2008
- There are openings under server cabinets that could not be measured but have been approximated in the model data results
- Airflow measured at the CRAC units is lower than the nominal 8,750 CFM
- There is higher CRAC static pressure (and lower airflow) due to underfloor and above-ceiling air obstructions (beams and ductwork)
- Some racks had glass doors on the front. These should be removed and precluded from reuse, and future procurement should never include racks with solid doors of any type
- There is a partially blocked return air duct opening in the ceiling
- A complete lack of blanking panels in the unused U-spaces within the racks is allowing recirculation of heated air and bypass of conditioned air around the rack equipment
- A complete lack of solid panels on the sides of the racks is allowing recirculation of heated air and bypass of conditioned air around the rack equipment
- A partial lack of solid panels on the tops of the racks is allowing recirculation of heated air and bypass of conditioned air around the rack equipment
- CRAC temperature and humidity set points vary
- CRAC units have the potential of fighting one another
- CRAC return air temperatures fall mostly below 70 F, resulting in reduced CRAC cooling capacity
- Cable trays and power distribution under the floor impede air distribution and airflow

FIGURE 2. OPY CABLEWAY INSTALLATION INTERFERES WITH AIRFLOW PATH

FIGURE 3. FLOW AND PRESSURE RESULTS FROM COMPUTATIONAL FLUID DYNAMICS (CFD) MODELING

Figure 3 and Figure 4 are results of the CFD analysis. The CFD model's flows matched the measured data well at the room level. Figure 3 shows the velocity profile of the air under the raised floor. This level of analysis should be repeated if the data center plans to enact some of the changes listed above to increase its carrying capacity. Figure 4 shows the temperatures at the top of the racks. The CFD analysis does not illustrate the several hot spots that the team measured at some server inlets in the space. This is primarily because, in a single day's assessment, the team was unable to gather sufficient data to provide that level of detail in the analysis. To deliver a more rigorous analysis, each server's airflow would need to be understood more precisely to capture all local hot spots.

FIGURE 4. CFD ANALYSIS RESULTS SHOWING TEMPERATURE PROFILE AT 6 FEET ABOVE THE FLOOR

IT AND MANAGEABILITY

The EPA's IT equipment is primarily purchased by the program offices and delivered to the OPY and East data centers for installation and operation. The EPA uses a standard configuration document [4] to ensure that a server has the appropriate security and networking setup upon its arrival. The Green Grid recommends that the EPA expand the setup process to include energy efficiency, power, and cooling information as well. The standard configuration should include turning on power management features in the server, setting servers in power-saving cooling mode where applicable, enabling fan speed control, and setting any other features to ensure the server will consume the minimum amount of energy during its operation. As new servers are brought into the space, many of them will have enhanced platform monitoring capabilities such as front-panel temperature information and server power consumption. Tracking these data points will give the operations staff a more useful power and thermal map of the data center and allow better planning and provisioning going forward.

The OPY data center does not currently track its servers' utilization. CPU utilization is potentially tracked during troubleshooting, but it is not regularly monitored. The Green Grid suggests that all servers be routinely monitored for utilization, primarily for the purpose of understanding the use of the IT asset and preparing the EPA mid-tier data centers for potential virtualization (a brief monitoring sketch appears below). Currently, no virtualization is in place in OPY. The Green Grid is confident that when server utilization is tracked, the lack of load on many of the servers could present a compelling case for virtualizing to meet the future IT needs of the EPA. The EPA should perform a formal utilization and usage study to assess the benefit of server consolidation and/or virtualization. A number of The Green Grid member companies have this capability, largely as an available tool or service that can give specifics regarding the potential gain or ROI.

OPERATIONS AND PROCESS

In general, OPY racks were lightly populated. During the initial walkthrough of the OPY data center, the assessment team noticed a single 1U server in a 42U-high rack. (A "U" is a term commonly used to describe the height of a component installed in an electronics rack, with 1U equal to 1.75 inches.) Under-provisioning to this extent is highly inefficient from an energy standpoint and wasteful of very expensive data center space. Billing the program offices for their IT assets can control waste. If the individual departments that own or purchased the assets were required to pay the fully burdened cost of their impact on the data center, it would communicate the actual TCO and encourage energy-efficient behaviors. Charging a department based on the number of Us it uses and its power connection potential can drive the right behavior and a better use of space and resources. For example, if a department with a 1U server in a rack by itself had to pay for the potential energy use of all 42 Us in the rack, it might opt for a more efficient approach. This same type of strategy also could work to prompt EPA IT customers to adopt virtualization. The benefits of virtualization are outside the scope of this report, but concerns about data security and availability have all been successfully addressed, and the EPA should lead the federal government's virtualization effort.

The energy savings available in a virtualized environment will generally exceed those obtained through a more efficient infrastructure. If appropriate charges were developed for individual departments, they could be presented with the opportunity to buy space and power in an EPA virtualized server environment or pay more for their own server and power use. Fair costing could be the motivation to drive virtualization in the EPA data centers.
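As an illustration of the routine utilization monitoring recommended above, the sketch below samples CPU and memory utilization with the cross-platform psutil library and appends the readings to a CSV trend file. The file name, sample count, and interval are arbitrary choices for this sketch; a real deployment would more likely feed an existing monitoring system.

```python
import csv
import time
from datetime import datetime, timezone

import psutil  # third-party package: pip install psutil

LOG_PATH = "server_utilization.csv"   # arbitrary file name for this sketch
SAMPLES = 60                          # roughly one hour of samples at the interval below
INTERVAL_SECONDS = 60

with open(LOG_PATH, "a", newline="") as f:
    writer = csv.writer(f)
    for _ in range(SAMPLES):
        cpu_pct = psutil.cpu_percent(interval=5)      # CPU use averaged over 5 s
        mem_pct = psutil.virtual_memory().percent     # memory in use, percent
        writer.writerow([datetime.now(timezone.utc).isoformat(), cpu_pct, mem_pct])
        f.flush()
        time.sleep(INTERVAL_SECONDS)
```

Trended over weeks, data of this kind is the starting point for the consolidation and virtualization study recommended above.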

The Green Grid realizes that these strategy changes are not trivial, nor implementable at an individual data center level. When the assessment team first reviewed OPY, the data center's energy bill was part of the overall fee for the lease of the complex, so even at the largest scale of the data center, the financial incentive to reduce energy was not directly there. The EPA has since changed that situation and is installing additional metering. Even though these types of changes are complex, The Green Grid believes that programmatic changes such as these represent the best opportunity for an energy savings benefit at an industry or national level.

The EPA does have a methodology for its OPY and other data centers to ensure that all servers remain useful and operations occur as they should. Staff monitor network activity for each server, making adjustments when activity drops off and when there is no network traffic for extended periods of time. It is not clear if this practice is institutionalized, but ensuring that servers are not abandoned in place or left to idle is an easy win for reducing energy use.

The Green Grid's final recommendations relate to the procurement area, where additional improvements can be made to address the data centers' needs for enhanced efficiency and lower power/cooling impacts. It could be beneficial to provide a formalized guideline to individual teams needing information and communication technology (ICT) equipment. The EPA could potentially require teams to complete a checklist ensuring that the servers have a Climate Savers Computing Initiative power supply, that dual power supplies are justified, that the proposed platform performs well on the SPECpower benchmark, that the server is right-sized and only the appropriate configuration is specified, and that the actual power and cooling loads are estimated by the supplier.

GENERAL

The EPA is developing an ENERGY STAR for Data Centers program, and The Green Grid applauds this effort. In fact, a number of The Green Grid member companies have data centers involved in the process of submitting test data as the program is ramped up. It is informative to note that OPY is not eligible for involvement in the process. To meet the ENERGY STAR criteria, a significant number of operational parameters and efficiencies must be measured and reported, which is not possible at the subject data center. As discussed, OPY is a mid-tier data center. The ENERGY STAR program is best suited for enterprise-class data centers, where the total cost of the power dictates that the data center put in place additional instrumentation and visibility into system operations. This is in no way a criticism of the EPA in general or these data centers in particular. Instead, it points out an opportunity. While the ENERGY STAR program will be a key tool for driving efficiency within large data centers, the mid-tier population also can be better supported, if not with direct ENERGY STAR recognition, then possibly with training programs and additional collateral to promote efficiency in localized and mid-tier data centers. The Green Grid and the EPA should continue to work together to lay out such a program.

OPY OVERVIEW

The assessment team estimated the PUE for the OPY data center to be 2.9. It should be noted that there are certain assumptions in this number; these are discussed in Appendix A. The team estimated the total power for the data center during the assessment to be 132 kW. Approximately 45 kW was being used by the IT equipment, and 87 kW was being consumed by cooling, power distribution, and lighting. OPY's density is currently less than 25 W/sq ft.

One important factor contributing to such a high PUE is the underutilization of the power and cooling equipment and the space. As the room fills, the PUE will naturally decrease somewhat. (Recall, in the previous section on power, the discussion of the UPS and how operating further down the percent-utilization curve significantly affects efficiency.) That said, The Green Grid does not expect an increase in utilization to make a substantial change to the OPY data center's PUE; the consensus of the team is that the PUE would still be over 2.0.

If OPY's load is assumed to remain constant 24 hours a day, 7 days a week, 365 days a year (not a likely scenario but acceptable for this paper's purposes), the total energy used for the year would be roughly 1.1 million kWh, which equates to an annual electric bill of about $75,000 (the arithmetic is sketched at the end of this section). Assuming that, with a very concerted effort including significant capital improvements and operational changes, the existing OPY infrastructure could be improved to a PUE of 1.8, the annual electric bill could be reduced to about $48,000. Recall the generous assumption above that OPY's loads are 24x7x365. A typical program office data center has regular peaks during the work week and lower utilization during evenings and weekends, so it will use less than the work-day-measured amount on an annual basis. Therefore, it can be concluded that the effort to get to a PUE of 1.8 would yield a savings of less than $2,000 a month for OPY. Even if the target PUE were a very aggressive 1.3 (which would require significant capital and operational changes), the savings would still be less than $3,000 a month. These monthly numbers will be further reduced if, through expected growth in the amount of IT equipment in the room, the current underutilization-driven portion of the PUE is decreased. It is unlikely that the project costs to obtain this improved efficiency could be recouped in any reasonable amount of time if based solely on the savings in the utility bills.

The conclusion here should not be that efficiency is not worth pursuing. Efficiency needs to be built in to every change and addition to the data center, rather than pursued as a major capital project expecting an ROI justification. Best practices and low-cost changes, per the listed recommendations, should still be pursued, but they will not significantly reduce the PUE.

SPECIFIC OPPORTUNITIES

If there is a silver bullet for this data center, it would be the potential for a data center consolidation. OPY has considerable capacity from a space, power, and cooling perspective. It would take a detailed engineering study to validate the opportunity, but closing other data centers and moving the IT equipment into OPY could offer significant savings. As it is currently configured, OPY is not ready for such a consolidation, but the move to allow for a much higher density would be relatively simple and, at first pass, would not require any additional major infrastructure components. There are a number of concepts for taking the existing space and converting it to a medium-density data center. This would result in significant operational and space savings for the EPA. Many of the detailed parametric recommendations above would need to be considered to make such a change successful. While the facility upgrades would improve the infrastructure efficiency, even more important would be a consolidation to newer, more efficient servers and a review for the application of virtualization wherever possible. Low utilization of IT hardware is as much a problem for efficiency as is a high PUE.
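Below is a sketch of the utility-bill arithmetic referenced earlier in this section, using the report's 45 kW IT load, the 6.74 cents/kWh average rate, and the deliberately generous flat 24x7x365 load assumption.

```python
HOURS_PER_YEAR = 24 * 365
RATE_USD_PER_KWH = 0.0674        # OPY annual average rate from the report
IT_KW = 45.0                     # measured IT load

def annual_bill(pue):
    """Annual electricity cost if the IT load ran flat out all year at the given PUE."""
    return IT_KW * pue * HOURS_PER_YEAR * RATE_USD_PER_KWH

for pue in (2.9, 1.8, 1.3):
    print(f"PUE {pue}: ~${annual_bill(pue):,.0f} per year")
# ~$77,000, ~$48,000, and ~$35,000 per year respectively -- close to the figures in
# the text. Because real loads fall below the flat-load assumption, the actual monthly
# savings are smaller still, which is the report's point about ROI-driven upgrades.
```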

PROGRAMMATIC OPPORTUNITIES

The limited potential for an ROI-based upgrade for localized and mid-tier data centers suggests that opportunities for efficiency improvements and energy use reductions be identified elsewhere. The Green Grid recommends several programmatic changes that, if implemented by the EPA and the federal government, could be very beneficial in providing localized and mid-tier data centers with the tools they need to promote and obtain higher efficiencies.

INSTALLATION TRAINING

Require a minimum amount of training for all contractors, engineers, and designers working in the data center. Figure 2 demonstrates the need for proper installation training, which could address the following:
- Airflow blockages
- Hot aisle/cold aisle
- Airflow path, CRAC to server
- Cable management
- Airflow management and sealing of openings

MINIMUM STANDARDS FOR PURCHASING

Develop a list of considerations for procurement and design for data centers. This would provide a minimum set of criteria to ensure efficiency and best practices, such as:
- VSDs on all CRACs and computer room air handlers (CRAHs)
- Minimum efficiency ratings for UPSs
- Rack configurations with front-to-back airflow (e.g., no sealed/glass doors, and solid-panel sides and tops)
- Cable brushes on all cable openings
- Others as defined

NEXT STEPS AND FUTURE WORK

The Green Grid believes that a significant portion of the MOU has been successfully completed in performing the assessment of the data center. It is now in the process of completing additional detailed reports about the data centers for the specific use of the data centers' operations staff. The Green Grid is eager to work with the EPA on the next phase of the project: developing educational collateral that leverages the results of this work so that other localized and mid-tier data centers can benefit from the team's findings and recommendations. The Green Grid believes that consolidation may represent a significant savings opportunity for the industry. Recall from the earlier discussion of data center taxonomy that the largest sector was that of the server closet and server room. The inefficiencies and low utilization of servers in this sector represent major room for improvement. Many of these 3.6 million servers could deliver energy savings if they were consolidated and located in larger, better-run, more efficiently designed data centers.

The Green Grid believes that several useful areas for materials, collateral information, and business processes should be developed. These areas include:
- Contractor/data center support staff training
- A standard configuration document that includes power and thermal issues
- Guidelines for procurement activities related to energy
- A process for fairly charging data center customers for the space and energy impacts of their servers

Some or all of the above may already exist and, if so, that is good news. The Green Grid assessment team would be eager to review anything that was not evident in the short assessment visit.

SUMMARY

Every time The Green Grid explores another data center, it is a valuable educational experience, and this project was no exception. From the extent of the preparatory work required, to the benefit of an accommodating local staff, to the amount of time and people needed to do the job properly on-site, The Green Grid continues to learn that gaining energy efficiency is always hard work. The Green Grid appreciates the opportunity to work with the EPA on this task. The Green Grid also has learned, or perhaps reaffirmed, that there is no magic or easy solution. The most interesting results from the assessment are the extensive benefits that the team believes can be derived from the recommended programmatic efforts. The Green Grid is pleased to present this summary report and looks forward to continued collaboration with the EPA in this vital area of interest.

ACKNOWLEDGEMENTS

The Green Grid would like to express its sincere gratitude to all those from the EPA and their affiliated organizations who supported The Green Grid's activities. They include Maureen Dowling of Jones Lang LaSalle (the OPY management company) and Helen Smith and Yvette Jackson of the Office of Administration & Resources Management (OARM). A special thank-you is due to Mary McCaffery for her leadership and encouragement, and particularly to James Freeman, who was our main contact at the EPA; only through his many hours of effort were we able to accomplish what we did.

REFERENCES

1. IDC, "Data Center of the Future," Special Study, IDC #06C4799, April 2006.
2. "The Green Grid Data Center Power Efficiency Metrics: PUE and DCiE," The Green Grid, Power-Efficiency-metrics-pue-and-dcie.aspx.
3. "ASHRAE Environmental Guidelines for Datacom Equipment: Expanding the Recommended Environmental Envelope," ASHRAE, 2008.
4. EPA Microsoft Windows Server 2003 Standard Configuration Document, Office of Environmental Information, Office of Technology Operations and Planning, Version 2.0, September 18, 2008.

EPA ASSESSMENT APPENDIX A: ONE POTOMAC YARD PUE CALCULATION

PUE = Total Power / IT Power

Total Power = IT Power + Cooling Power + Power Losses in Electrical Distribution + Lighting

IT POWER
IT power is the power at the server plug. We were not able to measure this specifically, but PDU output is a very good proxy if one is comfortable that it captures all the IT loads. The only gap is the line losses between the PDUs and the IT equipment, but these are generally very low (unless a low voltage such as 48 Vdc is being used). The output of the PDUs in the room was measured at about 45 kW.

LIGHTING
We assumed 1.5 W/sq ft across the 2,544 sq ft of data center, for 3.8 kW, and we assumed the lights were NOT on the UPS/PDU circuits.

POWER LOSSES IN ELECTRICAL DISTRIBUTION
The losses are contained in the conversions in the UPS and the PDUs, and they were significant. The UPS feed was 87 kW and the UPS output was 67 kW; the PDU feed was 67 kW and the PDU output was 45 kW. These are very high inefficiencies, which are primarily due to the low utilization. Total power lost in distribution is 42 kW.

COOLING POWER
Cooling comprises the CRAC units and the central plant. Limited measurements were available. There are 5 primary CRAC units, with off-line back-ups; the 5 do NOT have VFDs, and each has a humidifier. As we were unable to get actual measurements, we used name-plate values. This is rarely ideal, but as there were no VFDs it is somewhat more palatable. The CRAC fans were listed at 3.7 kW each (x5 = 18.5 kW). The humidity set points and process variables showed the room was calling for humidification. The CRACs cycle on/off for humidification; we assumed 1 on and 4 off. This is a reasonable assumption, and possibly generous as it could have been more, but the on/off cycle period was not measured. The humidifiers use 5.1 kW when on. The central plant could not be measured and detailed data could not be collected during the short visit, so we applied rules of thumb. A typically assumed value for a well-run central chilled-water plant with cooling towers is 0.53 kW/ton of cooling (a value of 0.72 kW/ton could be applied in the case of DX CRACs). The room cooling load was assumed to be the total UPS feed + CRAC power used + lighting, since all UPS, PDU, and IT losses occurred in the data center.

Room cooling load: CRACs (18.5) + humidifier (5.1) + lighting (3.8) + UPS feed (87) = 114.4 kW, so the additional load on the central cooling system (at 0.53 kW/ton) was 17.3 kW. Total cooling load is CRACs + humidifier + central = 18.5 + 5.1 + 17.3 = 40.9 kW.

PUE CALCULATION (all values in kW):
IT: 45
Lighting: 3.8
Power losses: UPS 20 + PDU 22 = 42
Cooling: CRACs 18.5 + humidifiers 5.1 + chiller 17.3 = 40.9

PUE = Total Power / IT Power = (IT + Lighting + Power Losses + Cooling) / IT = (45 + 3.8 + 42 + 40.9) / 45 = 131.7 / 45 ≈ 2.9
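The appendix arithmetic can be reproduced end to end with a short script. The 0.53 kW/ton rule of thumb comes from the text above; the 3.517 kW-per-ton figure is the standard thermal definition of a ton of cooling.

```python
# Inputs from Appendix A (all power values in kW)
IT = 45.0                 # PDU output, used as a proxy for power at the server plugs
LIGHTING = 3.8            # 1.5 W/sq ft over 2,544 sq ft, assumed not on UPS/PDU circuits
UPS_FEED, UPS_OUT = 87.0, 67.0
PDU_FEED, PDU_OUT = 67.0, 45.0
CRAC_FANS = 5 * 3.7       # five units at 3.7 kW nameplate fan power each
HUMIDIFIER = 5.1          # one of the five humidifiers assumed on at any time
KW_PER_TON = 3.517        # 1 ton of cooling = 3.517 kW of heat removal
PLANT_KW_PER_TON = 0.53   # rule of thumb for a well-run central chilled-water plant

dist_losses = (UPS_FEED - UPS_OUT) + (PDU_FEED - PDU_OUT)        # 20 + 22 = 42
room_heat_load = UPS_FEED + CRAC_FANS + HUMIDIFIER + LIGHTING    # 114.4
chiller = (room_heat_load / KW_PER_TON) * PLANT_KW_PER_TON       # ~17.2 (17.3 in the text)
cooling = CRAC_FANS + HUMIDIFIER + chiller                       # ~40.8

total = IT + LIGHTING + dist_losses + cooling
print(f"Total {total:.1f} kW, PUE = {total / IT:.2f}")
# ~131.6 kW and a PUE of ~2.93 -- the roughly 132 kW and 2.9 reported above,
# with small differences due to rounding.
```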


More information

Reducing Data Center Loads for a Large-Scale, Net Zero Office Building

Reducing Data Center Loads for a Large-Scale, Net Zero Office Building rsed Energy Efficiency & Renewable Energy FEDERAL ENERGY MANAGEMENT PROGRAM Reducing Data Center Loads for a Large-Scale, Net Zero Office Building Energy Efficiency & Renewable Energy Executive summary

More information

GUIDE TO ICT SERVER ROOM ENERGY EFFICIENCY. Public Sector ICT Special Working Group

GUIDE TO ICT SERVER ROOM ENERGY EFFICIENCY. Public Sector ICT Special Working Group GUIDE TO ICT SERVER ROOM ENERGY EFFICIENCY Public Sector ICT Special Working Group SERVER ROOM ENERGY EFFICIENCY This guide is one of a suite of documents that aims to provide guidance on ICT energy efficiency.

More information

How to Build a Data Centre Cooling Budget. Ian Cathcart

How to Build a Data Centre Cooling Budget. Ian Cathcart How to Build a Data Centre Cooling Budget Ian Cathcart Chatsworth Products Topics We ll Cover Availability objectives Space and Load planning Equipment and design options Using CFD to evaluate options

More information

Data Center Cooling & Air Flow Management. Arnold Murphy, CDCEP, CDCAP March 3, 2015

Data Center Cooling & Air Flow Management. Arnold Murphy, CDCEP, CDCAP March 3, 2015 Data Center Cooling & Air Flow Management Arnold Murphy, CDCEP, CDCAP March 3, 2015 Strategic Clean Technology Inc Focus on improving cooling and air flow management to achieve energy cost savings and

More information

Energy Efficiency Opportunities in Federal High Performance Computing Data Centers

Energy Efficiency Opportunities in Federal High Performance Computing Data Centers Energy Efficiency Opportunities in Federal High Performance Computing Data Centers Prepared for the U.S. Department of Energy Federal Energy Management Program By Lawrence Berkeley National Laboratory

More information

Power and Cooling for Ultra-High Density Racks and Blade Servers

Power and Cooling for Ultra-High Density Racks and Blade Servers Power and Cooling for Ultra-High Density Racks and Blade Servers White Paper #46 Introduction The Problem Average rack in a typical data center is under 2 kw Dense deployment of blade servers (10-20 kw

More information

Sealing Gaps Under IT Racks: CFD Analysis Reveals Significant Savings Potential

Sealing Gaps Under IT Racks: CFD Analysis Reveals Significant Savings Potential TECHNICAL REPORT Sealing Gaps Under IT Racks: CFD Analysis Reveals Significant Savings Potential By Lars Strong, P.E., Upsite Technologies, Inc. Bruce Long, Upsite Technologies, Inc. +1.888.982.7800 upsite.com

More information

IMPROVING DATA CENTER EFFICIENCY AND CAPACITY WITH AISLE CONTAINMENT

IMPROVING DATA CENTER EFFICIENCY AND CAPACITY WITH AISLE CONTAINMENT DATA CENTER RESOURCES WHITE PAPER IMPROVING DATA CENTER EFFICIENCY AND CAPACITY WITH AISLE CONTAINMENT BY: STEVE HAMBRUCH EXECUTIVE SUMMARY Data centers have experienced explosive growth in the last decade.

More information

Data Center Industry Leaders Reach Agreement on Guiding Principles for Energy Efficiency Metrics

Data Center Industry Leaders Reach Agreement on Guiding Principles for Energy Efficiency Metrics On January 13, 2010, 7x24 Exchange Chairman Robert Cassiliano and Vice President David Schirmacher met in Washington, DC with representatives from the EPA, the DOE and 7 leading industry organizations

More information

Increasing Data Center Efficiency through Metering and Monitoring Power Usage

Increasing Data Center Efficiency through Metering and Monitoring Power Usage White Paper Intel Information Technology Computer Manufacturing Data Center Efficiency Increasing Data Center Efficiency through Metering and Monitoring Power Usage To increase data center energy efficiency

More information

Energy Efficiency in New and Existing Data Centers-Where the Opportunities May Lie

Energy Efficiency in New and Existing Data Centers-Where the Opportunities May Lie Energy Efficiency in New and Existing Data Centers-Where the Opportunities May Lie Mukesh Khattar, Energy Director, Oracle National Energy Efficiency Technology Summit, Sep. 26, 2012, Portland, OR Agenda

More information

Data Center 2020: Delivering high density in the Data Center; efficiently and reliably

Data Center 2020: Delivering high density in the Data Center; efficiently and reliably Data Center 2020: Delivering high density in the Data Center; efficiently and reliably March 2011 Powered by Data Center 2020: Delivering high density in the Data Center; efficiently and reliably Review:

More information

Unified Physical Infrastructure (UPI) Strategies for Thermal Management

Unified Physical Infrastructure (UPI) Strategies for Thermal Management Unified Physical Infrastructure (UPI) Strategies for Thermal Management The Importance of Air Sealing Grommets to Improving Smart www.panduit.com WP-04 August 2008 Introduction One of the core issues affecting

More information

How To Improve Energy Efficiency In A Data Center

How To Improve Energy Efficiency In A Data Center Google s Green Data Centers: Network POP Case Study Table of Contents Introduction... 2 Best practices: Measuring. performance, optimizing air flow,. and turning up the thermostat... 2...Best Practice

More information

Data Center Design Guide featuring Water-Side Economizer Solutions. with Dynamic Economizer Cooling

Data Center Design Guide featuring Water-Side Economizer Solutions. with Dynamic Economizer Cooling Data Center Design Guide featuring Water-Side Economizer Solutions with Dynamic Economizer Cooling Presenter: Jason Koo, P.Eng Sr. Field Applications Engineer STULZ Air Technology Systems jkoo@stulz ats.com

More information

abstract about the GREEn GRiD

abstract about the GREEn GRiD Guidelines for Energy-Efficient Datacenters february 16, 2007 white paper 1 Abstract In this paper, The Green Grid provides a framework for improving the energy efficiency of both new and existing datacenters.

More information

Recommendations for Measuring and Reporting Overall Data Center Efficiency

Recommendations for Measuring and Reporting Overall Data Center Efficiency Recommendations for Measuring and Reporting Overall Data Center Efficiency Version 2 Measuring PUE for Data Centers 17 May 2011 Table of Contents 1 Introduction... 1 1.1 Purpose Recommendations for Measuring

More information

Statement Of Work Professional Services

Statement Of Work Professional Services Statement Of Work Professional Services Data Center Cooling Analysis using CFD Data Center Cooling Analysis Using Computational Fluid Dynamics Service 1.0 Executive Summary Table of Contents 1.0 Executive

More information

AIA Provider: Colorado Green Building Guild Provider Number: 50111120. Speaker: Geoff Overland

AIA Provider: Colorado Green Building Guild Provider Number: 50111120. Speaker: Geoff Overland AIA Provider: Colorado Green Building Guild Provider Number: 50111120 Office Efficiency: Get Plugged In AIA Course Number: EW10.15.14 Speaker: Geoff Overland Credit(s) earned on completion of this course

More information

Data Centre Energy Efficiency Operating for Optimisation Robert M Pe / Sept. 20, 2012 National Energy Efficiency Conference Singapore

Data Centre Energy Efficiency Operating for Optimisation Robert M Pe / Sept. 20, 2012 National Energy Efficiency Conference Singapore Data Centre Energy Efficiency Operating for Optimisation Robert M Pe / Sept. 20, 2012 National Energy Efficiency Conference Singapore Introduction Agenda Introduction Overview of Data Centres DC Operational

More information

DataCenter 2020: hot aisle and cold aisle containment efficiencies reveal no significant differences

DataCenter 2020: hot aisle and cold aisle containment efficiencies reveal no significant differences DataCenter 2020: hot aisle and cold aisle containment efficiencies reveal no significant differences November 2011 Powered by DataCenter 2020: hot aisle and cold aisle containment efficiencies reveal no

More information

The Benefits of Supply Air Temperature Control in the Data Centre

The Benefits of Supply Air Temperature Control in the Data Centre Executive Summary: Controlling the temperature in a data centre is critical to achieving maximum uptime and efficiency, but is it being controlled in the correct place? Whilst data centre layouts have

More information

Analysis of the UNH Data Center Using CFD Modeling

Analysis of the UNH Data Center Using CFD Modeling Applied Math Modeling White Paper Analysis of the UNH Data Center Using CFD Modeling By Jamie Bemis, Dana Etherington, and Mike Osienski, Department of Mechanical Engineering, University of New Hampshire,

More information

Free Cooling in Data Centers. John Speck, RCDD, DCDC JFC Solutions

Free Cooling in Data Centers. John Speck, RCDD, DCDC JFC Solutions Free Cooling in Data Centers John Speck, RCDD, DCDC JFC Solutions Why this topic Many data center projects or retrofits do not have a comprehensive analyses of systems power consumption completed in the

More information

Data Center Components Overview

Data Center Components Overview Data Center Components Overview Power Power Outside Transformer Takes grid power and transforms it from 113KV to 480V Utility (grid) power Supply of high voltage power to the Data Center Electrical Room

More information

Managing Cooling Capacity & Redundancy In Data Centers Today

Managing Cooling Capacity & Redundancy In Data Centers Today Managing Cooling Capacity & Redundancy In Data Centers Today About AdaptivCOOL 15+ Years Thermal & Airflow Expertise Global Presence U.S., India, Japan, China Standards & Compliances: ISO 9001:2008 RoHS

More information

- White Paper - Data Centre Cooling. Best Practice

- White Paper - Data Centre Cooling. Best Practice - White Paper - Data Centre Cooling Best Practice Release 2, April 2008 Contents INTRODUCTION... 3 1. AIR FLOW LEAKAGE... 3 2. PERFORATED TILES: NUMBER AND OPENING FACTOR... 4 3. PERFORATED TILES: WITH

More information

Overview of Green Energy Strategies and Techniques for Modern Data Centers

Overview of Green Energy Strategies and Techniques for Modern Data Centers Overview of Green Energy Strategies and Techniques for Modern Data Centers Introduction Data centers ensure the operation of critical business IT equipment including servers, networking and storage devices.

More information

Strategies for Deploying Blade Servers in Existing Data Centers

Strategies for Deploying Blade Servers in Existing Data Centers Strategies for Deploying Blade Servers in Existing Data Centers By Neil Rasmussen White Paper #125 Revision 1 Executive Summary When blade servers are densely packed, they can exceed the power and cooling

More information

Greening Commercial Data Centres

Greening Commercial Data Centres Greening Commercial Data Centres Fresh air cooling giving a PUE of 1.2 in a colocation environment Greater efficiency and greater resilience Adjustable Overhead Supply allows variation in rack cooling

More information

APC APPLICATION NOTE #112

APC APPLICATION NOTE #112 #112 Best Practices for Deploying the InfraStruXure InRow SC By David Roden Abstract The InfraStruXure InRow SC (ACSC100 and ACSC101) is a self-contained air conditioner for server rooms and wiring closets.

More information

Recommendations for Measuring and Reporting Overall Data Center Efficiency

Recommendations for Measuring and Reporting Overall Data Center Efficiency Recommendations for Measuring and Reporting Overall Data Center Efficiency Version 1 Measuring PUE at Dedicated Data Centers 15 July 2010 Table of Contents 1 Introduction... 1 1.1 Purpose Recommendations

More information

Unified Physical Infrastructure SM (UPI) Strategies for Smart Data Centers

Unified Physical Infrastructure SM (UPI) Strategies for Smart Data Centers Unified Physical Infrastructure SM (UPI) Strategies for Smart Data Centers Deploying a Vertical Exhaust System www.panduit.com WP-09 September 2009 Introduction Business management applications and rich

More information

DataCenter 2020: first results for energy-optimization at existing data centers

DataCenter 2020: first results for energy-optimization at existing data centers DataCenter : first results for energy-optimization at existing data centers July Powered by WHITE PAPER: DataCenter DataCenter : first results for energy-optimization at existing data centers Introduction

More information

Data Center Energy Profiler Questions Checklist

Data Center Energy Profiler Questions Checklist Data Center Energy Profiler Questions Checklist Step 1 Case Name Date Center Company State/Region County Floor Area Data Center Space Floor Area Non Data Center Space Floor Area Data Center Support Space

More information

Data Centers: How Does It Affect My Building s Energy Use and What Can I Do?

Data Centers: How Does It Affect My Building s Energy Use and What Can I Do? Data Centers: How Does It Affect My Building s Energy Use and What Can I Do? 1 Thank you for attending today s session! Please let us know your name and/or location when you sign in We ask everyone to

More information

Top 5 Trends in Data Center Energy Efficiency

Top 5 Trends in Data Center Energy Efficiency Top 5 Trends in Data Center Energy Efficiency By Todd Boucher, Principal Leading Edge Design Group 603.632.4507 @ledesigngroup Copyright 2012 Leading Edge Design Group www.ledesigngroup.com 1 In 2007,

More information

Intelligent Containment Systems

Intelligent Containment Systems Intelligent Containment Systems COOL Maximize cooling efficiency and easily maintain a perfectly controlled temperature. Air Management Systems Opengate solutions offer sophisticated airflow management

More information

W H I T E P A P E R. Computational Fluid Dynamics Modeling for Operational Data Centers

W H I T E P A P E R. Computational Fluid Dynamics Modeling for Operational Data Centers W H I T E P A P E R Computational Fluid Dynamics Modeling for Operational Data Centers 2 Executive Summary Improving Effectiveness of CFD Technology in Cooling of Data Centers IT managers continue to be

More information

THE GREEN GRID METRICS: DATA CENTER INFRASTRUCTURE EFFICIENCY (DCIE) DETAILED ANALYSIS

THE GREEN GRID METRICS: DATA CENTER INFRASTRUCTURE EFFICIENCY (DCIE) DETAILED ANALYSIS WHITE PAPER #14 THE GREEN GRID METRICS: DATA CENTER INFRASTRUCTURE EFFICIENCY (DCIE) DETAILED ANALYSIS EDITOR: GARY VERDUN, DELL CONTRIBUTORS: DAN AZEVEDO, SYMANTEC HUGH BARRASS, CISCO STEPHEN BERARD,

More information

How Does Your Data Center Measure Up? Energy Efficiency Metrics and Benchmarks for Data Center Infrastructure Systems

How Does Your Data Center Measure Up? Energy Efficiency Metrics and Benchmarks for Data Center Infrastructure Systems How Does Your Data Center Measure Up? Energy Efficiency Metrics and Benchmarks for Data Center Infrastructure Systems Paul Mathew, Ph.D., Staff Scientist Steve Greenberg, P.E., Energy Management Engineer

More information

Using thermal mapping at the data centre

Using thermal mapping at the data centre Technical Using thermal mapping at the data centre by Gregory Stockton, CompuScanIR.com Cooling the servers in data centres efficiently is critical to increasing IT reliability, maintaining high performance,

More information

The Efficient Enterprise. Juan Carlos Londoño Data Center Projects Engineer APC by Schneider Electric

The Efficient Enterprise. Juan Carlos Londoño Data Center Projects Engineer APC by Schneider Electric Ee The Efficient Enterprise Juan Carlos Londoño Data Center Projects Engineer APC by Schneider Electric Keystrokes Kilowatts Heat OUT Electricity IN Need for bandwidth exploding Going hyperbolic! 30,000

More information

DATA CENTER ASSESSMENT SERVICES

DATA CENTER ASSESSMENT SERVICES DATA CENTER ASSESSMENT SERVICES ig2 Group Inc. is a Data Center Science Company which provides solutions that can help organizations automatically optimize their Data Center performance, from assets to

More information

Environmental Data Center Management and Monitoring

Environmental Data Center Management and Monitoring 2013 Raritan Inc. Table of Contents Introduction Page 3 Sensor Design Considerations Page 3 Temperature and Humidity Sensors Page 4 Airflow Sensor Page 6 Differential Air Pressure Sensor Page 6 Water Sensor

More information

Managing Power Usage with Energy Efficiency Metrics: The Available Me...

Managing Power Usage with Energy Efficiency Metrics: The Available Me... 1 of 5 9/1/2011 1:19 PM AUG 2011 Managing Power Usage with Energy Efficiency Metrics: The Available Metrics and How to Use Them Rate this item (1 Vote) font size Data centers consume an enormous amount

More information

Metrics for Data Centre Efficiency

Metrics for Data Centre Efficiency Technology Paper 004 Metrics for Data Centre Efficiency Workspace Technology Limited Technology House, PO BOX 11578, Sutton Coldfield West Midlands B72 1ZB Telephone: 0121 354 4894 Facsimile: 0121 354

More information

Managing Data Centre Heat Issues

Managing Data Centre Heat Issues Managing Data Centre Heat Issues Victor Banuelos Field Applications Engineer Chatsworth Products, Inc. 2010 Managing Data Centre Heat Issues Thermal trends in the data centre Hot Aisle / Cold Aisle design

More information

Data center lifecycle and energy efficiency

Data center lifecycle and energy efficiency Data center lifecycle and energy efficiency White Paper Lifecycle management, thermal management, and simulation solutions enable data center energy modernization Introduction Data centers are coming under

More information

Specialty Environment Design Mission Critical Facilities

Specialty Environment Design Mission Critical Facilities Brian M. Medina PE Associate Brett M. Griffin PE, LEED AP Vice President Environmental Systems Design, Inc. Mission Critical Facilities Specialty Environment Design Mission Critical Facilities March 25,

More information

Cooling Audit for Identifying Potential Cooling Problems in Data Centers

Cooling Audit for Identifying Potential Cooling Problems in Data Centers Cooling Audit for Identifying Potential Cooling Problems in Data Centers By Kevin Dunlap White Paper #40 Revision 2 Executive Summary The compaction of information technology equipment and simultaneous

More information

Data Centre Cooling Air Performance Metrics

Data Centre Cooling Air Performance Metrics Data Centre Cooling Air Performance Metrics Sophia Flucker CEng MIMechE Ing Dr Robert Tozer MSc MBA PhD CEng MCIBSE MASHRAE Operational Intelligence Ltd. [email protected] Abstract Data centre energy consumption

More information

Air, Fluid Flow, and Thermal Simulation of Data Centers with Autodesk Revit 2013 and Autodesk BIM 360

Air, Fluid Flow, and Thermal Simulation of Data Centers with Autodesk Revit 2013 and Autodesk BIM 360 Autodesk Revit 2013 Autodesk BIM 360 Air, Fluid Flow, and Thermal Simulation of Data Centers with Autodesk Revit 2013 and Autodesk BIM 360 Data centers consume approximately 200 terawatt hours of energy

More information

Energy and Cost Analysis of Rittal Corporation Liquid Cooled Package

Energy and Cost Analysis of Rittal Corporation Liquid Cooled Package Energy and Cost Analysis of Rittal Corporation Liquid Cooled Package Munther Salim, Ph.D. Yury Lui, PE, CEM, LEED AP eyp mission critical facilities, 200 west adams street, suite 2750, Chicago, il 60606

More information

Combining Cold Aisle Containment with Intelligent Control to Optimize Data Center Cooling Efficiency

Combining Cold Aisle Containment with Intelligent Control to Optimize Data Center Cooling Efficiency A White Paper from the Experts in Business-Critical Continuity TM Combining Cold Aisle Containment with Intelligent Control to Optimize Data Center Cooling Efficiency Executive Summary Energy efficiency

More information

Optimum Climate Control For Datacenter - Case Study. T. Prabu March 17 th 2009

Optimum Climate Control For Datacenter - Case Study. T. Prabu March 17 th 2009 Optimum Climate Control For Datacenter - Case Study T. Prabu March 17 th 2009 Agenda 2 About EDEC (Emerson) Facility Data Center Details Design Considerations & Challenges Layout Design CFD Analysis Of

More information

Choosing Close-Coupled IT Cooling Solutions

Choosing Close-Coupled IT Cooling Solutions W H I T E P A P E R Choosing Close-Coupled IT Cooling Solutions Smart Strategies for Small to Mid-Size Data Centers Executive Summary As high-density IT equipment becomes the new normal, the amount of

More information

RiMatrix S Make easy.

RiMatrix S Make easy. RiMatrix S Make easy. 2 RiMatrix S Rittal The System. The whole is more than the sum of its parts. The same is true of Rittal The System. With this in mind, we have bundled our innovative enclosure, power

More information

Energy Efficient High-tech Buildings

Energy Efficient High-tech Buildings Energy Efficient High-tech Buildings Can anything be done to improve Data Center and Cleanroom energy efficiency? Bill Tschudi Lawrence Berkeley National Laboratory [email protected] Acknowledgements California

More information

How High Temperature Data Centers & Intel Technologies save Energy, Money, Water and Greenhouse Gas Emissions

How High Temperature Data Centers & Intel Technologies save Energy, Money, Water and Greenhouse Gas Emissions Intel Intelligent Power Management Intel How High Temperature Data Centers & Intel Technologies save Energy, Money, Water and Greenhouse Gas Emissions Power and cooling savings through the use of Intel

More information

LEVEL 3 DATA CENTER ASSESSMENT

LEVEL 3 DATA CENTER ASSESSMENT Nationwide Services Corporate Headquarters 410 Forest Street Marlborough, MA 01752 USA Tel: 800-342-5332 Fax: 508-303-0579 www.eecnet.com LEVEL 3 DATA CENTER ASSESSMENT Submitted by: Electronic Environments

More information

Optimizing Power Distribution for High-Density Computing

Optimizing Power Distribution for High-Density Computing Optimizing Power Distribution for High-Density Computing Choosing the right power distribution units for today and preparing for the future By Michael Camesano Product Manager Eaton Corporation Executive

More information

A Comparative Study of Various High Density Data Center Cooling Technologies. A Thesis Presented. Kwok Wu. The Graduate School

A Comparative Study of Various High Density Data Center Cooling Technologies. A Thesis Presented. Kwok Wu. The Graduate School A Comparative Study of Various High Density Data Center Cooling Technologies A Thesis Presented by Kwok Wu to The Graduate School in Partial Fulfillment of the Requirements for the Degree of Master of

More information

SURVEY RESULTS: DATA CENTER ECONOMIZER USE

SURVEY RESULTS: DATA CENTER ECONOMIZER USE WHITE PAPER #41 SURVEY RESULTS: DATA CENTER ECONOMIZER USE EDITOR: Jessica Kaiser, Emerson Network Power CONTRIBUTORS: John Bean, Schneider Electric Tom Harvey, Emerson Network Power Michael Patterson,

More information

Design Best Practices for Data Centers

Design Best Practices for Data Centers Tuesday, 22 September 2009 Design Best Practices for Data Centers Written by Mark Welte Tuesday, 22 September 2009 The data center industry is going through revolutionary changes, due to changing market

More information

Data Center Rack Level Cooling Utilizing Water-Cooled, Passive Rear Door Heat Exchangers (RDHx) as a Cost Effective Alternative to CRAH Air Cooling

Data Center Rack Level Cooling Utilizing Water-Cooled, Passive Rear Door Heat Exchangers (RDHx) as a Cost Effective Alternative to CRAH Air Cooling Data Center Rack Level Cooling Utilizing Water-Cooled, Passive Rear Door Heat Exchangers (RDHx) as a Cost Effective Alternative to CRAH Air Cooling Joshua Grimshaw Director of Engineering, Nova Corporation

More information

BCA-IDA Green Mark for Existing Data Centres Version EDC/1.0

BCA-IDA Green Mark for Existing Data Centres Version EDC/1.0 BCA-IDA Green Mark for Existing Data Centres Version EDC/1.0 To achieve GREEN MARK Award Pre-requisite Requirement All relevant pre-requisite requirements for the specific Green Mark Rating are to be complied

More information

EU Code of Conduct for Data Centre Efficiency

EU Code of Conduct for Data Centre Efficiency EU Code of Conduct for Data Centre Efficiency A Green Practice Accredited by British Computer Society (BCS) EU Code of Conduct for Data Centre Efficiency Local Organizer Practices in Data Centre Industry

More information