Research Publication
Date: 10 September 2010   ID Number: G00201044

What to Consider When Designing Next-Generation Data Centers

David J. Cappuccio

Leading-edge data centers are designed for flexibility, efficiency and scalability. Using best practices in design, IT leaders can reduce both capital and operating expenses for new and existing data centers.

Key Findings

- Efficient energy consumption is a key design criterion in emerging data centers.
- Using a single availability level for an entire new data center is no longer a best practice; instead, design in multiple tiers to support differing application goals.
- Mixed cooling techniques for differing workload types are becoming standard practice in new designs.

Recommendations

- Design individual pods to support near-term needs, avoiding overprovisioning but retaining the ability to scale at a later point.
- Focus on maximum scalability in the smallest space to gain the highest equipment utilization possible.
- Use outside ("free") air cooling wherever appropriate, along with alternative design techniques, to gain maximum energy efficiency.
ANALYSIS

What Do You Really Need in a Data Center Design?

This may seem an odd question to start with but, in fact, it gets to the core of the issue in data center builds today: companies are building data centers using old standards, old methodologies and old designs. If your first answer to this question was square footage, then you're looking at the design from the wrong perspective. Leading-edge data centers today generally need three things:

- The ability to support high-density growth in computing for an acceptable span of time, at acceptable levels of risk and at acceptable costs
- A design that supports both the scaling out and the scaling up of IT resources, depending on the needs of the business
- A design that accomplishes the first two in the most energy-efficient manner possible

All planning and design discussions should begin with these three objectives in mind.

What Are the Critical Design Considerations and Best Practices in Emerging Data Centers?

Historical design principles for data centers were simple: figure out what you have now, estimate growth for 15 to 20 years, then build to suit. Newly built data centers often opened with huge areas of pristine white floor space, fully powered and backed up by an uninterruptible power supply (UPS), water- and air-cooled, and mostly empty. Given the cost of mechanical and electrical equipment, as well as the price of power, this model no longer works. If you need 9,000 square feet during the life span of a data center, then design the site to support it, but only build out what you need for the next five to seven years. This modular approach, in which the populated floor space may only be 5,000 square feet initially, fully supported by power, UPS, chillers and generators, with an additional 4,000 square feet left as a slab or built out to the absolute minimum requirements (essentially a shell), is becoming a best practice.

From a cost-to-build perspective, this represents a radical drop in upfront capital expenditure. Based on our own data center construction cost models, a standard 9,000-square-foot, Tier 3 data center that will support 150 watts per square foot will cost approximately $27.4 million, with an annual electrical expenditure of $1.02 million (assuming 50% use and $0.09 per kilowatt-hour). A 4,000-square-foot-pod approach would reduce the build cost to $12.1 million upfront, with an annual electrical cost of $395,000. This pod would support up to 133 standard racks with 2,100 2U servers (assuming 80% rack density). Even in a modestly virtualized environment, the capacity would exceed 18,000 potential images.
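These dollar figures come from Gartner's internal cost model and cannot be derived exactly from the text, but a rough check is possible. In the Python sketch below, the facility overhead (PUE) value and all variable names are assumptions of this illustration, not parts of the model; it reproduces the rack, server and image counts, and shows that an assumed PUE of roughly 1.7 lands within a few percent of the quoted $395,000 electrical cost.

```python
# Back-of-envelope check of the 4,000-square-foot pod example.
# The rack/server/image math follows the text; the PUE multiplier is an
# assumption added here to approximate total facility energy, since the
# dollar figures in the text come from Gartner's internal cost model.

POD_SQFT = 4_000          # built-out pod floor space
WATTS_PER_SQFT = 150      # design density
UTILIZATION = 0.50        # 50% use, as in the text
PRICE_PER_KWH = 0.09      # $0.09 per kilowatt-hour
ASSUMED_PUE = 1.7         # hypothetical facility overhead (not from the text)

RACKS = 133               # standard racks supported by the pod
RACK_UNITS = 42
RACK_DENSITY = 0.80       # 80% rack density

servers_per_rack = int(RACK_UNITS * RACK_DENSITY // 2)   # 2U servers -> 16
total_servers = RACKS * servers_per_rack                 # 2,128, i.e. "2,100"

vm_images = 18_000
vms_per_server = vm_images / total_servers               # ~8.5 images/server

it_load_kw = POD_SQFT * WATTS_PER_SQFT * UTILIZATION / 1000    # 300 kW
annual_cost = it_load_kw * ASSUMED_PUE * 8760 * PRICE_PER_KWH  # ~$402,000

print(f"{total_servers} servers, {vms_per_server:.1f} images per server, "
      f"~${annual_cost:,.0f}/year in electricity")
```

About 8.5 virtual machines per physical server is indeed a modest consolidation ratio, which is why the 18,000-image figure is plausible for this footprint.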
What Are Today's Data Center Design Trends?

The most important considerations for data centers revolve around the rapid growth in high-density computing, coupled with the rapid growth in energy consumption. Traditionally, organizations would mitigate the power and cooling issues in data centers by spreading the physical infrastructure across a larger floor space. While server racks could theoretically accommodate up to 42 1U servers, the industry average utilization for racks is between 40% and 60% (between eight and 13 2U servers per rack). This trend is causing major issues: as more and more servers are needed and floor space becomes a premium, companies are forced to populate existing racks more densely, driving an increase in localized power and cooling demand. Add to this the increasing density of cores per socket in new servers, and the fact that the number of server instances and virtual machines running per rack is doubling every 18 months, and the overall compute capacity of racks is increasing at a much faster rate than in the past; power densities, and the power and cooling issues that come with them, will not be far behind.

Energy Consumption

Energy consumption will be the most dominant trend in data centers during the next five years, both from an efficiency standpoint (How do I reduce consumption?) and a monitoring/management standpoint (How do I ensure the most efficient use of my resources?). Reduction in energy consumption will take many forms, from introducing green technologies, chilled water or refrigerant cooling at the device level, to data center infrastructure management (DCIM) tools, potentially allowing the movement of resources based on workloads and time of day. With the expected regulatory involvement in data center efficiencies, IT and facilities managers will be required to show continuous improvements in how resources are utilized.

The trend toward higher-density cabinets and racks will continue, increasing the density of compute resources on the data center floor, and the density of both the power and cooling required to support them. For the past few years, IT managers have focused solely on solving the power and cooling issues with hot and cold aisles, distributed equipment placement, specialty cooling, raising the temperature of data centers, and self-contained environments. Moving forward, the issue will move up the corporate food chain, as executives realize that the substantial energy costs for IT today are but a fraction of what future costs could be at current growth rates. At current pricing, the operating expense (energy) to support an x86 server will exceed the cost of that server within three years. However, the current generation of x86 servers uses significantly less power than older-generation systems, which means that as IT demand grows, a simple swap of older-generation servers for newer ones could have the twofold effect of satisfying that demand while reducing overall energy consumption and the associated heat generation (see "Now Is the Time: Replace Servers Early and Save Money").
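To see how the three-year claim can hold, consider a hedged back-of-envelope calculation. Every input below except the $0.09-per-kilowatt-hour price is a hypothetical assumption chosen for illustration; actual server prices, power draws and facility overheads vary widely.

```python
# Illustrative break-even between server capex and the energy to run it.
# All inputs are hypothetical assumptions except the $0.09/kWh price,
# which appears in the text.

SERVER_PRICE = 2_000      # assumed purchase price of a commodity x86 server ($)
AVG_DRAW_KW = 0.5         # assumed average power draw (kW)
PUE = 1.8                 # assumed facility overhead for cooling/distribution
PRICE_PER_KWH = 0.09      # from the text
HOURS_PER_YEAR = 8760

annual_energy_cost = AVG_DRAW_KW * PUE * HOURS_PER_YEAR * PRICE_PER_KWH
breakeven_years = SERVER_PRICE / annual_energy_cost

print(f"~${annual_energy_cost:,.0f}/year in energy -> break-even in "
      f"{breakeven_years:.1f} years")   # ~$710/year -> ~2.8 years
```

Under these assumptions, cumulative energy spending overtakes the purchase price in just under three years; cheaper servers or higher power draws shorten the window further.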
How Much Energy Will I Need?

Building to one energy footprint across the data center has been a widely used technique, and is still used by many engineering firms. However, leading-edge data centers today are being built with density zones within the data center to reduce upfront capital costs (mechanical and electrical equipment represents up to 60% of initial construction costs). Most customers that have considered the density zone approach have found that truly high-density applications comprise, on average, approximately 10% to 15% of the total, while medium-density requirements are about 20%, with the rest of the workload allocated to low density. Think of densities as the average performance levels of racks of servers: some applications require peak performance during specific times, while other applications may be "steady state," never peaking above a lower performance level, but still required to run the business. Different densities require different levels of power and cooling, and rather than design the entire floor area for a single density, it is more cost-effective to vary the loads.

Using this design principle on the 4,000-square-foot-pod example from above would yield a lower initial capital cost as well as ongoing operational cost reductions, since the overall power envelope for the building would be smaller. Assuming 10% of the floor space was designed for high density (200 watts/square foot), 20% for medium density (150 watts/square foot) and the rest for low or normal density (100 watts/square foot), the overall building cost would be $10.6 million (a 14% reduction). At a 50% load, the yearly electrical costs would be $299,000 (a 32% reduction).
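The power-envelope arithmetic behind these numbers is straightforward, as the sketch below shows. Note that the blended load is 20% lower than the uniform design, while Gartner's model reports a 32% electrical saving; attributing the difference to right-sized mechanical and electrical plant is an inference, not a figure taken from the model.

```python
# Weighted power envelope for the density-zoned 4,000-square-foot pod.
# Zone shares and watts/sq ft are from the text; the comparison shows why
# both the capital and the operating numbers drop.

ZONES = [                 # (share of floor space, design watts per sq ft)
    (0.10, 200),          # high density
    (0.20, 150),          # medium density
    (0.70, 100),          # low/normal density
]

blended = sum(share * watts for share, watts in ZONES)     # 120 W/sq ft
uniform = 150
reduction = 1 - blended / uniform                          # 20% smaller envelope

print(f"blended design density: {blended:.0f} W/sq ft "
      f"({reduction:.0%} below a uniform {uniform} W/sq ft design)")
```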
If the zone requirements changed and more high-density floor space was needed, then scaling up the power distribution units (PDUs) would be a simple method of increasing power, and adding more on-floor computer room air-conditioning (CRAC) units would help address cooling. For larger data centers, a three- or even a four-pod approach would create a living facility, continually updated with the latest power and cooling technologies to ensure optimal performance levels. During the last phase (for example, the fourth pod), plans could begin to retrofit the first pod with newer technologies, beginning an evolutionary cycle for the facility and extending its useful life even further.

How Much Availability Do I Really Need?

The latest design change to emerge is around the idea of data center tiers. "Tier" has become a common industry term used to define the expected unplanned downtime for a given data center. This downtime is a factor of design principles coupled with equipment redundancy levels. In general terms, Tier 1 can be considered a server room, and can average 28 hours of unplanned downtime per year. Tier 2 designs assume no more than 22 hours of downtime per year, while Tier 3 allows only 1.6 hours, and Tier 4 users can expect no more than 0.4 hours per year. Each tier adds greater layers of redundancy and failover systems, and, as such, adds to the overall construction and operational costs.
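Expressed as availability percentages, which is how tiers are more often quoted, those downtime figures convert as follows; the sketch is simple arithmetic, not part of the tier definitions themselves.

```python
# Convert the annual unplanned-downtime figures cited above into the
# availability percentages more commonly quoted for data center tiers.

HOURS_PER_YEAR = 8760
downtime_hours = {"Tier 1": 28.0, "Tier 2": 22.0, "Tier 3": 1.6, "Tier 4": 0.4}

for tier, hours in downtime_hours.items():
    availability = 1 - hours / HOURS_PER_YEAR
    print(f"{tier}: {hours:>4.1f} h/yr downtime -> {availability:.3%} availability")
# Tier 1: 99.680%, Tier 2: 99.749%, Tier 3: 99.982%, Tier 4: 99.995%
```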
For many years, IT managers have looked to build data centers with a tier rating high enough to support their most mission-critical systems at the highest levels of availability. This created situations where all applications became highly available simply as an offshoot of running in the same data center, and the resulting cost of the facility was accepted as the cost of doing business. While this may have been a pleasant side effect of the design principles of prior years, looked at prudently, the obvious question emerges: Is there a better way to design data centers? The simple answer is yes.

Gartner has developed a data center cost model that focuses on projected development costs for a data center. These include all major components, such as the building shell; raised floor; environmental, electrical, mechanical and power requirements and consumption; and projected tier level. Using this model, we have developed scenarios for different data center types so that our clients can see the financial differences between design options, prior to engaging an engineering firm for the final designs. Historically, these data center types were fairly simple: Tier 1 through Tier 4, with a specific square footage and power envelope, in what many would consider the traditional method of data center design. During the past year, we have begun pricing multizoned data centers, putting different power density levels, or zones, within the same data center. This multizoned approach reduces capital expense, as less electrical and mechanical equipment is often required, while still maintaining performance scalability in the data center. The basic idea is that different application types will require different levels of growth and power (density) from their servers; as we know, all applications are not created equal, so designing floor space to support those density differences makes sense.

What About Free Cooling?

The last major trend we are seeing is the introduction of free cooling whenever possible, through the use of air-side economizers (or, in some cases, water-side economizers). Simply put, this is a method of using outside air as much as possible to cool the data center (or water supply), rather than always using mechanically produced cold air. Although economizers have been used for many years in manufacturing, their use in data centers has begun only recently, with the new focus on energy conservation. From a quick analysis perspective, designers need to ask themselves: If the inlet air temperature for a data center cooling system is 62°F to 64°F, how many nights each year is the outside air near or below this temperature? How many days is it near or below this temperature? The sketch below illustrates both this screening question and the exhaust-air mixing technique described in the paragraph that follows it.
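A minimal sketch of both calculations follows. The function names and the mixing example are illustrative assumptions, and hourly_temps_f stands in for the local weather data a real analysis would supply.

```python
# Sketch of the two calculations implied above: (1) how many hours a year
# outside air is cold enough to use directly, and (2) what fraction of
# outside air to blend with rack exhaust when the outside air is too cold.

SUPPLY_SETPOINT_F = 63.0          # inlet target, middle of the 62-64F band

def economizer_fraction(hourly_temps_f):
    """Fraction of the year when outside air is at or below the setpoint."""
    eligible = sum(1 for t in hourly_temps_f if t <= SUPPLY_SETPOINT_F)
    return eligible / len(hourly_temps_f)

def outside_air_fraction(outside_f, exhaust_f, target_f=SUPPLY_SETPOINT_F):
    """Blend fraction f of outside air with rack exhaust so that
    f*outside + (1-f)*exhaust equals the target supply temperature."""
    return (exhaust_f - target_f) / (exhaust_f - outside_f)

# Example: a 30F winter night with 95F rack exhaust needs roughly a
# 49/51 outside/exhaust mix to deliver 63F supply air.
print(f"{outside_air_fraction(30.0, 95.0):.0%} outside air")
```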
In many areas around the world, the answer far exceeds 50% of the hours in the year, which means that for at least half the year, cold air can be brought in from outside with a little filtering rather than produced mechanically, substantially reducing energy costs. An added benefit in cold climates is that very cold air must be brought up to optimal temperature before reaching the equipment, and this can be accomplished by mixing hot exhaust air from the racks into the incoming outside air, essentially using your own waste heat to normalize the intake air. From a green IT, energy conservation and corporate social responsibility perspective, what could be better?

This research is part of a set of related research pieces. See "Sustainability for Growth: A Supply Chain and IT Transformation" for an overview.