EXECUTIVE REPORT. Can Your Data Center Infrastructure Handle the Heat of High Density?




If you're losing sleep wondering how your company can migrate to high density computing architectures, you're not alone. According to research compiled by Gartner, Deloitte Consulting and other analysts, by 2012 roughly half of existing in-house data centers will be forced to relocate to new facilities, retrofit an existing facility or outsource various high-performance applications. It's a forced migration, for the simple reason that high density computing is no longer a future trend: it's here, and it's being used aggressively to enhance business and operations for companies of virtually every size and industry worldwide. According to IDC research data, approximately 75 percent of medium-to-large enterprise companies are in the process of implementing high density computing platforms. By 2012, this number will approach 100 percent. High density computing is today's standard for best-in-class performance and availability, and those that fail to keep pace will find themselves no longer competitive.

Initial Considerations: High Density Architectures

Migrating from a traditional data center to a high density data center requires an understanding of newer protocols, hardware and architectures. High density data centers consume far more power than traditional centers, and in addition to consuming more power, they also generate more heat. This places greater emphasis on air flow and temperature controls. High density data centers typically use a computational fluid dynamics (CFD) simulation to model the entire layout of the data center and run through "what if" scenarios to identify hot spots and potential problem areas. The sub-floor ventilation and perforated tiles are designed and laid out using engineering-driven schematics that maximize output and availability for any emergency overheating situation.
Ventilation in a high density environment requires a minimum three-foot raised floor to accommodate necessary sub-floor equipment. Racks are set up in a hot aisle/cold aisle configuration with dual air flow designed to disperse cold air into the cold aisles and pull hot air off the hot aisles, minimizing the mixing of cold and hot air. The result is a cooling strategy that maximizes the efficiency of the CRAC (computer room air conditioning) units. CFD simulations are also highly effective at identifying the strategic placement of CRAC units to ensure that optimal airflow can be diverted and/or increased to any potential hot zones.

High density/high availability applications are consolidated, hosted and managed primarily on blade servers, which require less physical space than traditional servers and can be condensed into ultra high-performance racks that deliver unprecedented performance. These modular racks take up little space and are relatively easy to update or reconfigure. Blade servers are designed for maximum processing ability, and this increased performance comes at a hefty cost, literally: blade servers typically consume 10 times more total power than conventional rack servers, and require at least a four-fold increase in cooling capacity. As a result, according to a recent Uptime Institute study, power consumption in data centers has increased sevenfold (over 600 percent) in just seven years. Companies today are confronting dramatic cost increases across the board in building and operating a high density, high availability infrastructure. According to the Uptime Institute, building a Tier 3 or greater facility with 25,000 square feet of raised white floor at 100 watts per square foot can cost more than $60 million. High density server technologies require 200+ watts per square foot, scaling these costs even further. Beyond rising building costs, capacity planning is also a challenge. Overbuilding an infrastructure, leaving white floor space under-utilized, can be detrimental to the return on investment. Underbuilding, or poorly retrofitting an infrastructure that is obsolete and doesn't meet user demand, creates additional problems.

Assessing Your Current Data Infrastructure

Although some in-house data centers are currently equipped to manage high density server and storage requirements, the majority are not. In fact, most data centers have an inherent probability of failure, by design, of 25 to 50 percent within five years.
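The build-cost figures cited above can be turned into a rough back-of-the-envelope estimate. The sketch below is illustrative only: it assumes build cost scales linearly with floor area and power density, which is a simplification (real projects are lumpier), and the `build_cost_estimate` helper is hypothetical, not an industry tool.

```python
# Back-of-the-envelope build-cost scaling, anchored to the figure cited above:
# ~$60M for 25,000 sq ft of raised floor at 100 watts per square foot.
# Linear scaling by area and density is an assumption, not an industry model.

def build_cost_estimate(sq_ft, watts_per_sq_ft,
                        base_cost=60_000_000, base_sq_ft=25_000, base_watts=100):
    area_factor = sq_ft / base_sq_ft               # scale by floor area
    density_factor = watts_per_sq_ft / base_watts  # scale by power density
    return base_cost * area_factor * density_factor

# The same 25,000 sq ft floor built out for 200 watts per square foot:
print(f"${build_cost_estimate(25_000, 200):,.0f}")  # $120,000,000
```

Under these admittedly crude assumptions, doubling power density alone doubles the build cost, which is why 200+ watts per square foot represents such a significant cost escalation.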
Advances in data center technology and high density performance/scalability have emerged and matured so quickly that many in-house infrastructures are obsolete, or dangerously close to it. CIOs and IT managers working with traditional data centers face three choices: build a new data center, retrofit an existing data center, or use a colocation vendor. To make the best decision, a detailed functional analysis is required. The essential first step to understanding your infrastructure's ability to handle high density requirements is a thorough comparative evaluation: assess your current data center environment, then compare it to a similarly sized, fully equipped high density data center.

During this process, you will need to address five key areas that directly impact both your data center infrastructure and your business as a whole:

1. Identify the weakest points in your infrastructure and focus there.
2. Understand the risks associated with an inadequate infrastructure.
3. Research and document all aspects of the risks you've identified.
4. Work from the inside out. Start with internal processes and structural changes that can be made, and work toward external solutions or products you may need to integrate to make your migration plan successful.
5. Prioritize your solutions. Use your cost of downtime as a way to prioritize and justify change and investments.

Side-by-Side Comparison

As we can see from Figure 1, migrating from a traditional data infrastructure to a high density environment at an industry-typical 5,000 square feet requires changes and considerations across the board. Square footage within the data center becomes reprioritized, power and cooling requirements increase, redundancy protocols become paramount and additional mechanical yard space is needed.
Figure 1. Existing data center vs. existing space expanded to 250 watts per square foot:

Existing Data Center       | Expanded to 250 Watts/Sq Ft                  | Notes
5,000 sq ft                | 5,000 sq ft                                  | You will lose 948 usable sq ft to added PDUs and CRACs
100 watts/sq ft            | 250 watts/sq ft                              |
2N design                  | 2N design                                    |
6 - 225 kVA PDUs           | 14 - 225 kVA PDUs                            | Adding 8 - 225 kVA PDUs
2 - 625 kVA UPSs           | 2 - 625 kVA UPSs, 2 - 800 kVA UPSs           | Adding 2 - 800 kVA UPSs*
2 switch gear              | 4 switch gear                                | Adding 2 switch gear*
2 ATSs                     | 4 ATSs                                       | Adding 2 ATSs (*or yard space)
1 fuel tank                | 2 fuel tanks                                 | Adding 1 fuel tank**
2 - 1.25 MW generators     | 2 - 1.25 MW and 2 - 2.00 MW generators       | Adding 2 - 2.00 MW generators**
2 - 250-ton air chillers   | 2 - 250-ton and 2 - 400-ton air chillers     | Adding 2 - 400-ton air chillers**

* Requires mechanical space. ** Requires mechanical yard space.
Going to 250 watts per square foot also requires a three-foot raised floor and appropriately sized water loop piping.

Square Footage

Many IT managers worry about rapid loss of square footage in the data center as new equipment is added to the floor. In a high density environment, some square footage is conserved or even reduced through the use of blade servers, which operate at far higher densities than conventional servers and storage equipment. However, high density architectures require additional PDUs (power distribution units) and CRAC units, offsetting these gains. In a typical conversion, you will sacrifice nearly 1,000 square feet to the power and infrastructure hardware needed to manage the power and cooling requirements of a high density environment. Keep in mind, however, that properly installed high density architectures maximize every available square foot and virtually eliminate the need for surplus white space to accommodate future growth. Scalability is achieved through protocols that leverage existing high density hardware, including virtualization and cross-platform optimization.

Increased Watts per Square Foot

Traditional data centers operate at approximately 100 watts per square foot. High density infrastructures require an average of 250 watts per square foot. This means more power, more cooling, greater fault tolerance and higher operating expenses. For example, high density blade servers operate at 50 amps, a major increase over previous server generations that typically ran at 20 amps. As a general rule of thumb, it takes $1 of utility power to cool every dollar of direct consumption in a high density data center.

2N Design

Historically, not all high availability applications required a high density computing environment, and not all high density applications required a high availability platform. In its earliest form, dense computing was used only by seismic processing and reservoir modeling applications requiring an N+1 power infrastructure, at best.
Although these systems required more power and cooling, it was acceptable for them to be restarted in the event they overheated or power was disrupted.
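The 250-watts-per-square-foot figure and the dollar-for-dollar cooling rule of thumb above can be combined into a rough annual utility estimate. This is a sketch under stated assumptions: the $0.10/kWh electricity rate is invented for illustration, and real cooling ratios vary by facility.

```python
# Rough annual utility cost for a high density floor, combining the report's
# 250 W/sq ft density with its rule of thumb that cooling costs roughly $1
# for every $1 of direct IT power. The $0.10/kWh rate is an assumed example.

HOURS_PER_YEAR = 8760

def annual_utility_cost(sq_ft, watts_per_sq_ft=250,
                        dollars_per_kwh=0.10, cooling_ratio=1.0):
    it_kw = sq_ft * watts_per_sq_ft / 1000               # direct IT load, kW
    it_cost = it_kw * HOURS_PER_YEAR * dollars_per_kwh   # annual IT power bill
    return it_cost * (1 + cooling_ratio)                 # add cooling, dollar for dollar

# The industry-typical 5,000 sq ft floor from Figure 1, at the assumed rate:
print(f"${annual_utility_cost(5_000):,.0f}")  # $2,190,000
```

At a traditional 100 watts per square foot the same formula gives $876,000 a year, which illustrates why utility cost dominates the ongoing economics of high density floors.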

Today's high density infrastructures require full 2N redundancy, supported by expanded uninterruptible power supply (UPS), power distribution unit (PDU) and automatic transfer switch (ATS) capacity. That's because high availability/high density computing is engineered specifically to accommodate always-on applications that require robust processor speeds and vast data storage capabilities. With a 2N protocol in place, the impact of system failure is virtually eliminated: each server or storage device is configured with A/B backup availability at every point of risk within the data center (i.e., power, connectivity, hardware). If the power fails or a cooling unit falters, a reserve generator or unit picks up at the moment of failure, with zero interruption. If a server fails or overheats, redundancy protocols transfer the workload to an alternate server, ensuring business continuity with zero downtime. In many enterprise-scale ERP models, applications must be deployed with maximum (i.e., permanent) availability for thousands of users across global geographies. Only carefully configured high density computing and data center environments can deliver this level of global availability and productivity.

Mechanical and Yard Space

Perhaps the most significant upgrade for most data centers is the equipment needed to support increased power and cooling requirements and the related backup redundancy measures. Additional mechanical and/or external yard space is needed to support the increase in heavy hardware. This includes doubling the number of many existing components, including UPS units, switch gear, ATS units, fuel tanks, high-powered generators and air chillers. Additionally, the number of PDUs and CRAC units increases by a factor of two. In many instances, the additional hardware is higher powered than the existing units, translating into even greater power and cooling costs.
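The A/B failover behavior described above can be sketched in a few lines. This is a minimal conceptual model, not a real DCIM or facility-control API; all class and component names are hypothetical.

```python
# Minimal sketch of the 2N (A/B) concept: every point of risk has a fully
# independent counterpart, so a single failure never interrupts service.
# Names are illustrative only; this is not a real facility-management API.

class Feed:
    """One side of a redundant pair (e.g., a UPS feed)."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

class RedundantPair:
    """A 2N pair: serve from side A, fail over to side B instantly."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def active(self):
        if self.a.healthy:
            return self.a
        if self.b.healthy:
            return self.b
        raise RuntimeError("both sides down: 2N capacity exhausted")

power = RedundantPair(Feed("UPS-A"), Feed("UPS-B"))
print(power.active().name)   # UPS-A
power.a.healthy = False      # simulate a UPS failure
print(power.active().name)   # UPS-B: the load carries on with zero downtime
```

Contrast this with N+1, where spare capacity is shared: in 2N each side can carry the full load alone, which is what allows any single-point failure to be absorbed without interruption.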
The costs associated with these upgrades can reach seven figures.

Making the Right Decision

Once your assessment is complete, you will have a better understanding of your existing infrastructure and whether expansion or retrofitting is viable. For most data centers, it isn't an option: high build costs, restricted mechanical/yard space and increased power and cooling burdens present huge obstacles.

Another related issue is the fluctuation of utility costs. While infrastructure and design challenges can be anticipated with near certainty, power and cooling requirements cannot. The principal challenge moving forward is to develop data center models that can mitigate these increasing costs. IDC anticipates that annual cooling and power costs could exceed 70 percent of high density computing expenses, far outweighing the purchase cost of hardware and components. This places an even greater premium on data center design that optimizes power and cooling distribution with maximum utility efficiency. High density infrastructure is essential to running high density applications, so these costs are unavoidable and must be approached with new protocols to minimize the impact on IT budgets. The alternative is no less than the proverbial 800-pound gorilla: colocating data and applications with a data hosting provider, typically within a Tier 3 or greater data center designed and dedicated for high density requirements. It's an obvious choice for many CIOs and IT managers, as best-in-class hosted data facilities help achieve cost certainty while ensuring maximum performance and availability of high density applications through future-proof design and planning. For example, CyrusOne's Dallas Technology Center provides 192,000 square feet of data center floor space with 85 acres for future build-to-suit and footprint expansion, more than ample for a wide variety of enterprise companies' needs. The data center provides optimal redundancy and scalability, with an on-site electrical substation to provide full 2N redundancy, two independent power grids, three separate substations, two 42 MVA transformers within on-site substations and a tie breaker to further reinforce 2N reliability and seamless productivity. The facility's cooling infrastructure is supported by four 1,500-ton Trane chillers and one 650-ton Carrier chiller.
Communications capabilities feature unlimited bandwidth, upgradeable to OC-192. Connectivity is secured through a dual SONET ring powered by AT&T and Verizon. This facility offers the most advanced resources and architecture, presenting a stellar example of best-in-class colocation. Most colocation data centers are expressly designed from the ground up, with deeply integrated infrastructures, dedicated built-in redundancies and abundant external space for installing and expanding power and cooling units. In doing so, they help achieve perhaps the greatest factor of all: peace of mind, knowing that your data and applications are secure and your business is free to run and grow, unimpeded by technology demands.

About CyrusOne

With over two dozen data centers across the globe, CyrusOne helps many of the world's largest global businesses, including 9 of the global Fortune 20 and over 135 of the Fortune 1000, as well as companies of all sizes, take advantage of the latest data center technology and realize top operational efficiencies through:

Flexible, Scalable Solutions: Receive flexible data center solutions that readily scale to match the needs of your growing business.

Proven, Innovative Technology: Benefit from the latest data center innovations that CyrusOne's expert technicians can put to work for your IT environment.

Exceptional Service: Enjoy personalized, consultative service through all stages of the relationship: design, build, installation, management and reporting.

CyrusOne National IX: Offers low-cost metro connectivity and city-to-city transport in an ever-growing number of cities across the US.

About the Author

Scott Brueggeman oversees the management of CyrusOne's global marketing, product development, inside sales, and corporate communications, including branding, demand creation and public relations. His 20 years of marketing and sales experience include Fortune 50 firms as well as smaller high-growth companies. Prior to CyrusOne, he spent several years running marketing at a data center hosting and managed services company, and served as Chief Marketing Officer at PEAK6 Investments, an international financial services firm. Before that he was VP of Marketing for CareerBuilder, and also held leadership positions at AT&T and PepsiCo. Brueggeman serves on several advisory boards.

CyrusOne | (855) 564-3198 | cyrusone.com