Can Your Data Center Infrastructure Handle the Heat of High Density? 5 Ways to Test Your Data Center Infrastructure Before Installing Blades




By Dan Vazquez, VP of Technology, CyrusOne

If you're losing sleep wondering how your company can migrate to high density computing architectures, you're not alone. According to research compiled by Gartner, Deloitte Consulting and other analysts, by 2012 roughly half of existing in-house data centers will be forced to relocate to new facilities, retrofit an existing facility or outsource various high-performance applications. It's a forced migration, for the simple reason that high density computing is no longer a future trend; it's here, and it's being used aggressively to enhance business and operations for companies of virtually every size and industry worldwide. According to IDC research data, approximately 75 percent of medium-to-large enterprise companies are in the process of implementing high density computing platforms. By 2012, this number will approach 100 percent. High density computing is today's standard for best-in-class performance and availability, and those that fail to keep pace will find themselves no longer competitive in today's markets.

Initial considerations for high density architectures

Migrating from a traditional data center to a high density data center requires an understanding of newer protocols, hardware and architectures. High density data centers consume far more power than traditional centers, and with that greater power draw comes more heat, which places greater emphasis on air flow and temperature controls.

High density data centers typically use a computational fluid dynamics (CFD) simulation to model the entire layout of the data center and run through "what if" scenarios to identify hot spots and potential problem areas. The sub-floor ventilation and perforated tiles are designed and laid out using engineering-driven schematics that maximize output and availability in any emergency overheating situation. Ventilation in a high density environment requires a minimum three-foot raised floor to accommodate the necessary sub-floor equipment. Racks are set up in a hot aisle/cold aisle configuration with dual air flow designed to disperse cold air into the cold aisles and pull hot air off the hot aisles, minimizing the mixing of cold and hot air. The result is a highly effective cooling strategy that maximizes the efficiency of the CRAC units. CFD simulations are also highly effective at identifying the strategic placement of CRAC units, ensuring that airflow can be diverted and/or increased to any potential hot zones.
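To see why airflow dominates high density design, consider the standard sensible-heat approximation used in data center sizing: required airflow in CFM is roughly 3.16 times the heat load in watts divided by the air temperature rise in degrees Fahrenheit. The minimal Python sketch below applies it; the rack wattages and the 20 degree F supply/return rise are illustrative assumptions, not figures from this paper.

# Back-of-envelope airflow sizing using the sensible-heat approximation
# CFM ~= 3.16 * watts / delta_T_F. Rack loads and the 20 F supply/return
# delta are illustrative values, not figures from this paper.

def required_cfm(heat_load_watts, delta_t_f=20.0):
    """Approximate airflow (cubic feet per minute) to remove a heat load."""
    return 3.16 * heat_load_watts / delta_t_f

for rack_kw in (4, 10, 25):  # conventional rack vs. two blade densities
    print(f"{rack_kw:>2} kW rack -> ~{required_cfm(rack_kw * 1000):,.0f} CFM")

The jump from hundreds of CFM for a conventional rack to thousands for a blade rack is why tile layout and CRAC placement are validated with CFD rather than rules of thumb.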

High density/high availability applications are consolidated, hosted and managed primarily on blade servers, which require less physical space than traditional servers and can be condensed into ultra high-performance racks. These modular racks take up little space and are relatively easy to update or reconfigure. Blade servers are designed for maximum processing ability, and this increased performance comes at a hefty cost, literally: blade servers typically consume 10 times more total power than conventional rack servers, and require at least a four-fold increase in cooling capacity. As a result, according to a recent Uptime Institute study, power consumption in data centers has increased sevenfold (over 600 percent) in just seven years.

In fact, companies today are confronting dramatic cost increases across the board in building and operating a high density, high availability infrastructure. According to The Uptime Institute, building a Tier 3 or greater facility with 25,000 square feet of raised white floor at 100 watts per square foot can cost more than $60 million. High density server technologies require 200+ watts per square foot, scaling these costs even further. Beyond rising building costs, capacity planning is also a challenge. Overbuilding an infrastructure, so that white floor space is under-utilized, can be detrimental to the return on investment; under-building, or poorly retrofitting an infrastructure that is obsolete and doesn't meet user demand, creates its own problems.
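One way to read the build-cost figures above is that construction cost tracks critical load rather than floor area. The sketch below works through that arithmetic; the assumption that cost per critical watt stays roughly flat as density rises is ours, for illustration only.

# Illustrative build-cost scaling from the Uptime Institute figure cited
# above ($60M for 25,000 sq ft at 100 W/sq ft). Holding cost per critical
# watt constant at higher densities is an assumption for illustration.

sq_ft = 25_000
cost_per_watt = 60e6 / (sq_ft * 100)  # ~$24 per watt of critical load

for density in (100, 200, 250):  # design watts per square foot
    print(f"{density} W/sq ft -> ~${cost_per_watt * sq_ft * density / 1e6:.0f}M to build")

Even if cost per watt falls somewhat with scale, the direction is clear: doubling design density roughly doubles the capital bill, which is why a careful assessment matters before committing to a build or retrofit.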

Assessing your current data infrastructure

Although some in-house data centers are currently equipped to manage high density server and storage requirements, the majority are not. In fact, most data centers have an inherent probability of failure by design of 25-50 percent within five years. Advances in data center technology and high density performance and scalability have emerged and matured so quickly that many in-house infrastructures are obsolete, or dangerously close to it. CIOs and IT managers working with traditional data centers are faced with three choices: build a new data center, retrofit an existing data center, or use a colocation vendor. To make the best decision, a detailed functional analysis is required. The essential first step to understanding your infrastructure's ability to handle high density requirements is to perform a thorough comparative evaluation: assess your current data center environment, then compare it to a similarly-sized, fully-equipped high density data center.

During this process, you will need to address five key areas that directly impact both your data center infrastructure and your business as a whole:

1. Identify the weakest points in your infrastructure and focus there.
2. Understand the risks associated with an inadequate infrastructure.
3. Research and document all aspects of the risks you've identified.
4. Work from the inside out. Start with internal processes and structural changes that can be made, then work toward external solutions or products you may need to integrate to make your migration plan successful.
5. Prioritize your solutions. Use your cost of downtime as a way to prioritize and justify changes and investments.

Side-by-side comparison

As Figure 1 shows, migrating from a traditional data infrastructure to a high density environment at an industry-typical 5,000 square feet requires changes and considerations across the board. Square footage within the data center becomes reprioritized, power and cooling requirements increase, redundancy protocols become paramount and additional mechanical/yard space is needed. (A quick capacity check derived from the table follows it.)

Figure 1. Expanding an existing 5,000 sq ft data center from 100 to 250 watts per square foot.

Existing Data Center | Existing Space Expanded to 250 Watts/Sq Ft | Notes
5,000 sq ft | 5,000 sq ft | You will lose 948 usable square feet to added PDUs and CRACs
100 watts/sq ft | 250 watts/sq ft |
2N design | 2N design |
6 x 225 kVA PDUs | 14 x 225 kVA PDUs | Adding 8 x 225 kVA PDUs
2 x 625 kVA UPSs | 2 x 625 kVA + 2 x 800 kVA UPSs | Adding 2 x 800 kVA UPSs; requires mechanical space
2 switch gear | 4 switch gear | Adding 2 switch gear; requires mechanical space
2 ATSs | 4 ATSs | Adding 2 ATSs; requires mechanical space or mechanical yard space
1 fuel tank | 2 fuel tanks | Adding one fuel tank; requires mechanical yard space
2 x 1.25 MW generators | 2 x 1.25 MW + 2 x 2.00 MW generators | Adding 2 x 2 MW generators; requires mechanical yard space
2 x 250-ton air chillers | 2 x 250-ton + 2 x 400-ton air chillers | Adding 2 x 400-ton air chillers; requires mechanical yard space
7 x 30-ton CRACs | 17 x 30-ton CRACs | Adding 10 x 30-ton CRAC units; requires a three-foot raised floor at 250 watts/sq ft and appropriately sized water loop piping
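One way to sanity-check Figure 1 is to compare nameplate CRAC cooling capacity against the design IT load before and after the expansion, using the standard conversion of one ton of refrigeration to 3.517 kW. The sketch below does so; counting every CRAC as active (ignoring any N+1 holdback) is a simplifying assumption of ours.

# Rough cooling-capacity check of the Figure 1 expansion. Uses only the
# table's numbers plus 1 ton of refrigeration = 3.517 kW. Counting every
# CRAC as active (no redundancy holdback) is a simplifying assumption.

TON_TO_KW = 3.517
SQ_FT = 5_000

for label, watts_per_sqft, cracs in (("existing", 100, 7), ("expanded", 250, 17)):
    it_load_kw = SQ_FT * watts_per_sqft / 1000
    cooling_kw = cracs * 30 * TON_TO_KW  # 30-ton CRAC units
    print(f"{label}: IT load {it_load_kw:.0f} kW, CRAC capacity "
          f"~{cooling_kw:.0f} kW ({cooling_kw / it_load_kw:.2f}x load)")

Both configurations carry roughly 1.4-1.5x cooling capacity over the design IT load, which is consistent with the table's point that CRAC count must grow faster than the floor plan: ten units are added to preserve headroom at 2.5 times the load.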

Square footage

Many IT managers worry about rapid loss of square footage in the data center as new equipment is added to the floor. In a high density environment, some square footage is conserved or even reduced through the use of blade servers, which operate at far higher densities than conventional servers and storage equipment. However, high density architectures require additional PDUs and CRAC units, which offsets these gains. In a typical conversion, you will sacrifice nearly 1,000 square feet to the additional power and cooling hardware needed to manage a high density environment. Keep in mind, however, that properly-installed high density architectures maximize every available square foot and virtually eliminate the need for surplus white space to accommodate future growth; scalability is achieved through protocols that leverage the existing high density hardware, including virtualization and cross-platform optimization.

Increased Watts Per Square Foot

Traditional data centers operate at approximately 100 watts per square foot, while high density infrastructures require an average of 250 watts per square foot. This means more power, more cooling, greater fault tolerance and higher operating expenses. For example, high density blade servers operate at 50 amps, a major increase over previous server generations that typically ran at 20 amps. As a general rule of thumb, a high density data center spends $1 of utility power on cooling for every $1 of direct power consumption.

2N Design

Historically, not all high availability applications required a high density computing environment, and not all high density applications required a high availability platform. In its earliest form, dense computing was used only by seismic processing and reservoir modeling applications requiring, at best, an N+1 power infrastructure. Although those systems required more power and cooling, it was acceptable for them to be restarted if they overheated or if power was disrupted. Today's high density infrastructures require full 2N redundancy supported by additional uninterruptible power supplies (UPS), power distribution units (PDU) and automatic transfer switches (ATS). That's because high availability/high density computing is engineered specifically to accommodate always-on applications that require robust processor speeds and vast data storage capabilities.

With a 2N protocol in place, the impact of system failure is virtually eliminated, as each server or storage device is configured with A/B backup availability at every point of risk within the data center (i.e., power, connectivity, hardware). If the power fails or a cooling unit falters, a reserve generator or unit picks up immediately at the moment of failure, with zero interruption. If a server fails or overheats, redundancy protocols transfer the workload to an alternate server, ensuring business continuity with zero downtime. In many enterprise-scale ERP deployments, applications must be available permanently to thousands of users across global geographies, and only carefully configured high density computing and data center environments can deliver this level of global availability and productivity.
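The advantage of 2N over N+1 is easy to express as availability arithmetic: with two fully independent A/B paths, both must fail at the same moment for the load to drop. The sketch below works the numbers; the 99.9 percent per-path availability figure and the independence of failures are illustrative assumptions, not claims from this paper.

# Availability comparison: single path vs. 2N (independent A/B paths).
# The 99.9% per-path availability and failure independence are
# illustrative assumptions.

HOURS_PER_YEAR = 8760
path_availability = 0.999

configs = (
    ("single path (N)", path_availability),
    ("2N (A/B paths)", 1 - (1 - path_availability) ** 2),  # both must fail
)
for label, a in configs:
    print(f"{label}: {a:.6f} availability, "
          f"~{(1 - a) * HOURS_PER_YEAR:.2f} h/yr downtime")

In practice paths share failure modes (a common utility feed, human error), so real 2N designs also separate the A and B systems physically, as the facility description later in this paper illustrates.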

Mechanical and Yard Space

Perhaps the most significant change for most data centers is the equipment upgrade needed to support increased power and cooling requirements and the related backup redundancy measures. Additional mechanical and/or external yard space is needed to house the additional heavy hardware. This includes doubling the number of many existing components, including UPS units, switch gear, ATS units, fuel tanks, high-powered generators and air chillers, while the number of PDUs and CRAC units more than doubles (see Figure 1). In many instances, the additional hardware is higher powered than the existing units, translating into even greater power and cooling costs. The costs associated with these upgrades can reach seven figures.

Making the Right Decision

Once your assessment is complete, you will have a better understanding of your existing infrastructure and whether expansion or retrofitting is viable. For most data centers it isn't: high build costs, restricted mechanical/yard space and increased power and cooling burdens present huge obstacles. A related issue is the fluctuation of utility costs. While infrastructure and design challenges can be anticipated with near certainty, power and cooling costs cannot. The principal challenge moving forward is to develop data center models that can mitigate these rising costs. IDC anticipates that annual cooling and power costs could exceed 70 percent of high density computing expenses, far outweighing the purchase cost of hardware and components. This places an even greater premium on data center design that optimizes power and cooling distribution with maximum utility efficiency. High density infrastructure is essential to running high density applications, so these costs are unavoidable and must be approached with new protocols to minimize the impact on IT budgets.

The alternative is no less than the proverbial 800-pound gorilla: colocating data and applications with a data hosting provider, typically within a Tier 3 or greater data center that is designed and dedicated for high density requirements. It's an obvious choice for many CIOs and IT managers, as best-in-class hosted data facilities help achieve cost certainty while ensuring maximum performance and availability of high density applications through future-proof design and planning.
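Combining the 250 watts per square foot design point with the "$1 of cooling for every $1 of power" rule of thumb cited earlier makes the operating-cost exposure easy to estimate. A back-of-envelope sketch follows; the $0.10/kWh utility rate and the assumption of full, constant utilization are ours, for illustration only.

# Annual power-plus-cooling estimate for the paper's 5,000 sq ft,
# 250 W/sq ft example, applying the "$1 of cooling per $1 of power"
# rule of thumb. The $0.10/kWh rate and 100% utilization are assumptions.

HOURS_PER_YEAR = 8760
it_load_kw = 5_000 * 250 / 1000                  # 1,250 kW critical load
power_cost = it_load_kw * HOURS_PER_YEAR * 0.10  # direct consumption
total_cost = power_cost * 2                      # cooling doubles the bill

print(f"IT power:        ~${power_cost / 1e6:.2f}M per year")
print(f"Power + cooling: ~${total_cost / 1e6:.2f}M per year")

At that scale the utility bill rivals the hardware budget within a few years, which is consistent with the IDC projection above and helps explain the appeal of providers who can amortize efficient power and cooling plant across many tenants.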

For example, CyrusOne's Dallas Technology Center provides 192,000 square feet of data center floor space with 85 acres for future build-to-suit and footprint expansion, more than ample for a wide variety of enterprise needs. The data center provides optimal redundancy and scalability: an on-site electrical substation delivering full 2N redundancy, two independent power grids, three separate substations, two 42 MVA transformers within the on-site substations, and a tie breaker to further reinforce 2N reliability and seamless productivity. The facility's cooling infrastructure is supported by four 1,500-ton Trane chillers and one 650-ton Carrier chiller. Communications capabilities feature unlimited bandwidth, upgradeable to OC-192, with connectivity secured through a dual SONET ring powered by AT&T and Verizon. The facility offers advanced resources and architecture, presenting a stellar example of best-in-class colocation.

Most colocation data centers are expressly designed from the ground up, with deeply integrated infrastructures, dedicated built-in redundancies and abundant external space for installing and expanding power and cooling units. In doing so, they deliver perhaps the greatest benefit of all: peace of mind, knowing that your data and applications are secure and your business is free to run and grow, unimpeded by technology demands.
