How To Power And Cool A Data Center




Architecting the Green Data Center
Viktor Hagen, Field Business Development, Data Center & High Performance Computing

Where We Are Today
Environmental sustainability is a major global trend for the 21st century. Corporations are adopting standards and implementing initiatives to improve the environmental performance of their operations. Green data centers and CCRE can provide environmental benefits to owners and operators of corporate and commercial real estate while also reducing cost and risk.

Power Consumption Mitigation: What Does Green Mean to You?
Political: Blue State, Red State; Green Party
Regulatory: compliance?
Health: Trader Joe's, Whole Foods, Walmart
Social: hybrids, bottled water
Technology: blades, scalable UPS, fuel cells
Economics: cost centers, selective accounting
You? Control, influence

Growth Projections: What Are the Experts Saying?
- By 2008, 50% of today's data centers will have insufficient power and cooling capacity to meet the demands of high-density equipment
- Through 2009, energy costs will emerge as the second-highest operating cost (behind labor) in 70% of data center facilities worldwide
- By 2011, power demand for high-density equipment will level off or decline
- By 2011, in-rack and in-row cooling will be the predominant cooling strategy for high-density equipment
- By 2011, in-chassis cooling technologies will be adopted in 15% of servers
Source: Gartner, "Meeting the DC power and cooling challenge"

Where Does the Power Go?
Power losses generate heat. Of the power supplied in the DC:
- CRAC: 50%
- Server/storage: 26%
- Conversion: 11%
- Network: 10%
- Lighting: 3%
Each watt consumed by IT infrastructure carries a burden factor of 1.8 to 2.5 for the power consumption associated with cooling, conversion/distribution, and lighting.* (*Source: APC)
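Reading the burden factor as a PUE-like multiplier (an interpretation; the slide does not define the factor precisely), total facility draw follows directly from the IT load. A minimal Python sketch, with a hypothetical 100 kW IT load invented for illustration:

def total_facility_power_kw(it_load_kw, burden_factor):
    # Assumes the burden factor is the ratio of total facility power
    # (IT + cooling + conversion/distribution + lighting) to IT power.
    return it_load_kw * burden_factor

# Hypothetical 100 kW IT load across the slide's 1.8-2.5 range:
for burden in (1.8, 2.5):
    print(f"burden {burden}: {total_facility_power_kw(100, burden):.0f} kW total")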

Cooling: Supply or Distribution?
Current designs are being specified to cool up to 30 kW per rack. Is the limit the cooling supply or its distribution? A Cisco partner tested at 30 kW and was able to cool a normal server rack, and testing different configurations produced very different cooling airflow results. Targeted cooling utilizing modular hot/cold segregation is essential to future-proof designs.

Cooling Strategies by Rack Density
[Chart: cooling options plotted against power per rack, 0-25 kW, with infrastructure examples at increasing density: 21 2RU dual-core servers, 2 Catalyst 6509s, high-performance blade servers.]
- Highest densities: consider in-row cooling or liquid-cooled racks
- Fully ducted exhaust into the hot aisle to prevent recirculation
- More than 2 perforated tiles needed per rack; consider spot cooling
- Lowest densities: vent rack exhausts, install closed cabinets and blanking panels to prevent recirculation
(A sketch of this mapping as a lookup follows.)
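A minimal Python sketch of the chart's tiers; the 5/10/15 kW thresholds are assumptions read off the chart axis, not values stated on the slide:

def cooling_strategy(rack_kw):
    # Thresholds below are assumed from the chart axis (0-25 kW).
    if rack_kw <= 5:
        return "Vent rack exhausts; closed cabinets and blanking panels"
    if rack_kw <= 10:
        return "More than 2 perforated tiles per rack; consider spot cooling"
    if rack_kw <= 15:
        return "Fully ducted exhaust into the hot aisle"
    return "In-row cooling or liquid-cooled racks"

print(cooling_strategy(4))   # low-density rack of 2RU servers
print(cooling_strategy(20))  # high-performance blade servers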

Enabling Hot Aisle/Cold Aisle Designs with High Density: 6509 and 9513 Chassis
Example: Panduit cabinet, 45RU (32 in. W x 40 in. D x 84 in. H)
- Up to 20 kW/cabinet heat-rejection capability
- Three 6509s or three 9513s per rack
- Front-to-back airflow into hot aisles
- Integrated cable management
- Modular design to support future air handlers or spot cooling
Part numbers CN4-1 and CN4-2 for the MDS 9513, and CN4-3 for the Catalyst 6509E

Calculating Efficiency and Operating Costs
kW = kVA x power factor
Operating cost of a 1,000 kW UPS system with 90% efficiency, at $0.10 per kWh, for one year at a 50% load level:
Operating cost = 1,000 kW x (1 / 0.90 efficiency at load level) x 8,760 hours x $0.10 x 0.50
1 MW for 1 year = $486,618
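As a cross-check, the formula as given evaluates to roughly $486,667, in line with the headline figure. A minimal Python sketch of the same arithmetic (the function name is ours):

def ups_operating_cost(capacity_kw, efficiency, rate_per_kwh, load_level, hours=8760):
    # Operating cost = capacity x (1 / efficiency at load level)
    #                  x hours x rate x load level
    return capacity_kw * (1 / efficiency) * hours * rate_per_kwh * load_level

# 1,000 kW UPS, 90% efficient, $0.10/kWh, one year at 50% load:
print(f"${ups_operating_cost(1000, 0.90, 0.10, 0.50):,.0f}")  # $486,667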

Affording the Next-Gen Data Center (Source: Gartner 2006)

                                 Legacy Server     High-Density Server
Power per rack                   2-3 kW            >20 kW
Power per floor space            30-40 W/ft²       700-800 W/ft²
Cooling needs (chilled airflow)  200-300 cfm       3,000 cfm

A 20,000 ft² legacy DC designed to accommodate 2-3 kW per rack (100-200 racks, 800 kW) carries an annual operating expense of about $800k. Introducing high-density infrastructure into 1/3 of that facility (+33% power) drives the annual operating expense to $4.6M*; doing so in a legacy facility is cost prohibitive. *Peripheral DC costs considered.
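The legacy column is internally consistent, as a quick Python check using this slide's figures shows:

# 800 kW spread over 20,000 sq ft lands at the top of the table's
# 30-40 W/sq ft legacy range.
legacy_load_w = 800 * 1000
floor_area_sqft = 20_000
print(f"{legacy_load_w / floor_area_sqft:.0f} W/sq ft")  # 40 W/sq ft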

Data Center Zoning
- Allows for a mixed environment of high density/low density
- Allows for targeted availability, service levels, cooling, and UPS run-time
- Aligns well to virtualized environments
[Diagram: separate power & cooling zones at 20 kW per rack and 5 kW per rack.]

Data Center Evolution Best Practices
Consolidation: reduce operating expense; can strain power grids, HVAC systems, and control planes; also an opportunity to audit power and cooling design.
Power & cooling: reducing component count increases density; an assessment is needed to determine OPEX for the site, and may be needed to audit existing CPI.
Evolving to green: choose efficient components (systems); balance density/space; assess new technology; make targeted, incremental changes for efficiency; virtualize to increase utilization; share practices.

Where Cisco Has Impact

Cisco ACE with FWSM Reduces Power by 85% (Design Efficiency)
Performance requirement: 10 Gbps load balancing, 20 Gbps firewalling, 10 virtual contexts, high availability.
[Chart: incremental power required (W) for four designs meeting the requirement. The appliance-based designs (20 SLBs with 2 or 4 firewalls) require roughly 11,300-13,300 W; the design with 2 ACE + 8 FWSM requires 1,820 W.]
- 85% power reduction with virtualized, integrated modules: ~11 kW saved
- Rack space saved by using virtualized, integrated modules: ~30RU
- Additional savings from reduced cabling, port consumption, and support costs
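A quick Python check of the claim from the chart's endpoint bars (the pairing of bars to configurations is approximate):

# Highest appliance-based bar vs. the integrated ACE/FWSM design,
# read off the chart above.
appliance_w = 13_300   # 20 SLBs + 4 firewalls
integrated_w = 1_820   # 2 ACE + 8 FWSM
saved_w = appliance_w - integrated_w
print(f"saved {saved_w:,} W ({saved_w / appliance_w:.0%})")
# saved 11,480 W (86%) -- close to the slide's ~11 kW / 85% figures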

Reduce Power Consumption Through Service Density (Design Efficiency)
Support for 200 contexts. Example: 10 servers per SLB/FW context.
- ACE module: 220 W incremental, for SSL termination + SLB
- FWSM: 172 W incremental, for firewalling
- ACE + FWSM in a Catalyst 6500 handle 200 SLB/FW contexts: 392 W incremental total

Appliance Model: Repeats Itself 700 Watts at a Time (Design Inefficiency)
Example: 10 servers per SLB/FW context/group.
1) FW appliance = 700 W incremental
2) SLB appliance = 300 W incremental
3) App firewall appliance = 400 W incremental
700 W incremental with each new server group (items 2 and 3)

Appliance Model: Repeats Itself 700 Watts at a Time
Example: 10 servers per SLB/FW context/group.
- Design efficiency: 392 W incremental, one time; supports 200 groups
- Design inefficiency: 700 W incremental with EACH new server group

Appliance Model Deployed in a Redundant Configuration: Repeats Itself 1,400 Watts at a Time
Example: 10 servers per SLB/FW context/group, redundant configuration.
- Design efficiency: 800 W incremental, one time; supports 200 server groups
- Design inefficiency: 1,400 W incremental with EACH new server group (x N)
(A sketch of how the two models scale follows.)
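A minimal Python sketch of how the two models scale with the number of server groups, using the one-time and per-group figures from these slides (function names are ours):

def appliance_model_w(groups, redundant=True):
    # Each new server group adds its own SLB + app-firewall appliances:
    # 700 W incremental, or 1,400 W in a redundant configuration.
    return groups * (1400 if redundant else 700)

def integrated_model_w(redundant=True):
    # ACE + FWSM in a Catalyst 6500: a one-time load supporting
    # up to 200 server groups.
    return 800 if redundant else 392

for groups in (1, 10, 100, 200):
    print(f"{groups:>3} groups: appliances {appliance_model_w(groups):>7,} W, "
          f"integrated {integrated_model_w():,} W")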

Reduce Power Consumption Through Service Density (Design Efficiency)
- Support for 200 contexts
- Reduce complexity
- Increase manageability
- Reduce latency
- Eliminate single points of failure
The ACE and FWSM deployed in a Catalyst 6500 provide these services within the network fabric, eliminating the appliances and their associated load.

MDS SAN Fabric Virtualization Is More Power Efficient than the Competition
[Diagram: Legacy solution: three SAN islands (LSANs) plus a tape library, interconnected through primary and backup SAN routers across 3x another vendor's storage switches. Cisco MDS solution: VSAN 1, VSAN 2, and VSAN 3 plus a tape library consolidated on a single MDS 9513 using VSANs with Inter-VSAN Routing.]

Cisco MDS Is More Power Efficient than the Closest Competitor (Apples to Apples, Design Efficiency)

                                   Other's Solution              Cisco MDS Solution
Devices required                   5                             1
Stranded (wasted) ports and slots  Yes: 90 ports plus 9 slots    None
Max. number of isolated SANs       3                             256
Routing bandwidth between SANs     Up to 5 x 4G ISLs: 20G        Backplane BW: 48G/slot
Power per host port                11.98 watts                   3.23 watts
Total power required               3,450 watts                   931 watts

Power breakdown, other's solution (2 routers + 3 storage directors): 2 routers @ 600 W each; 2 CPs @ 100 W each; 3 32-port modules @ 90 W each; 2 16-port modules @ 90 W each; 3 chassis with fans @ 100 W each.
Power breakdown, Cisco MDS solution (1 Cisco MDS 9513): 2 48-port modules @ 130 W each; 2 12-port modules @ 92 W each; 2 Sup-2 @ 88 W each; 2 crossbars @ 44 W each; 1 chassis with fans @ 223 W.
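The Cisco column's breakdown sums exactly to its 931 W total, which a few lines of Python reproduce:

# Sum the Cisco MDS 9513 loads from the power breakdown above.
mds_loads_w = [
    (2, 130),  # 48-port modules
    (2, 92),   # 12-port modules
    (2, 88),   # Sup-2 supervisors
    (2, 44),   # crossbars
    (1, 223),  # chassis with fans
]
print(sum(qty * watts for qty, watts in mds_loads_w), "W")  # 931 W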

Cisco's Virtualization Solutions: Industry-Leading Results (Design Efficiency)
Step 1: Build a network solution using virtual fabrics: Marketing SAN, Sales SAN, HR SAN, and Tape SAN sharing a common physical fabric.
Step 2: Storage virtualization: increase disk utilization to ~70% and consolidate usable space in the actual storage devices.
Taking a tape subsystem offline can save $3,800 in power and cooling per year. The result is greater power savings and a reduced footprint.

CCRE and Sustainability: Financial and Strategic Value
The environmental benefits also create financial and strategic value through:
- Lower operating costs
- Reduced financial risk
- Reduced legal & regulatory risk
- Reduced risk to brand equity

Common Questions: What Are the Variables?
- Nameplate or average operating consumption?
- DC versus AC power?
- How to calculate data center efficiency?
- How to measure at the systems level? Why should IT care?
- Cooling supply versus air distribution?
- What is the network's role, and where is it going?
- Where to learn more?
Source: Gartner, "Meeting the DC power and cooling challenge"

In Closing
- Cisco's virtualization solutions slow the growth of power demand through increased utilization while reducing component count
- Cisco's next-generation products and solutions will further reduce power consumption at the systems level
- Cisco is focused on environmental concerns, from executive direction to individual product design
- Cisco's CA group provides Power and Cooling auditing as part of a Network Readiness Assessment to assist with facilities support planning

What Can Be Done to Reduce Power Consumed by Network Services?

Action: Consolidate networks
Benefit/Implication: Fewer networks = less cost; reduced storage power draw

Action: Avoid gateways and consolidate functions
Benefit/Implication: Specialized appliances are not power efficient, due to redundant internal cooling, switching, and power-conversion elements

Action: Virtualize network elements
Benefit/Implication: One network or network element per customer is power- and space-inefficient; consider technologies such as MPLS to enable future virtualization

Action: View power requirements holistically
Benefit/Implication: Prioritize efforts based upon reducing overall power consumption