Managing Data Center Power & Cooling
Debbie Montano
dmontano@force10networks.com

Agenda
- Data Center Power Crunch
- Strategies for Reducing Power Across IT
- Power Efficiencies in Networking Today and Moving Forward
- Customer Case Studies
- Q&A

The Greening of the Data Center
Until recently, power efficiency in the data center has not been paramount in IT rollouts. This is now changing, driven by:
- Rising power costs
- Blackouts/brownouts and capacity planning
- Limits to grid/substation scaling (no more power available)
- Political pressures and green legislation driving greater data center efficiencies
If we as an industry don't lead the process, we will be dragged to an unacceptable position.

Energy & Power
Energy: Joule (J), Watt-hour (Wh), Kilowatt-hour (kWh), British Thermal Unit (BTU)
Power: the rate of use of energy
- Watt = Joule/second
- Kilowatt = 1,000 Watts
- BTU/hour: 1 Watt = 3.413 BTU/hr
- Ton of cooling (usually = 12,000 BTU/hr)
Analogy: energy is like gallons of water; power is like gallons/hour, the rate of use.
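
To make the conversions concrete, here is a small Python sketch using only the factors from this slide; the 14 kW example rack is the 2006 density cited on the next slide.

```python
# Unit conversions from the slide: 1 watt = 3.413 BTU/hr,
# and one ton of cooling is usually 12,000 BTU/hr.
BTU_PER_HR_PER_WATT = 3.413
BTU_PER_HR_PER_TON = 12_000

def watts_to_btu_per_hr(watts: float) -> float:
    """Heat output (BTU/hr) produced by a load drawing `watts`."""
    return watts * BTU_PER_HR_PER_WATT

def watts_to_cooling_tons(watts: float) -> float:
    """Tons of cooling needed to remove the heat from `watts` of load."""
    return watts_to_btu_per_hr(watts) / BTU_PER_HR_PER_TON

rack_watts = 14_000  # a 14 kW rack
print(f"{rack_watts} W -> {watts_to_btu_per_hr(rack_watts):,.0f} BTU/hr "
      f"= {watts_to_cooling_tons(rack_watts):.1f} tons of cooling")
# 14000 W -> 47,782 BTU/hr = 4.0 tons of cooling
```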

Data Center Crisis: Power/Cooling
- Moore's Law: more transistors, more MIPS, more watts, more BTUs
- 1 watt of power consumed requires 3.413 BTU/hour of cooling to remove the associated heat
- Data center power density went from 2.1 kW/rack in 1992 to 14 kW/rack in 2006
- Three-year costs of power and cooling are roughly equal to the initial capital equipment cost of the data center
- 63% of 369 IT professionals said that running out of space or power in their data centers had already occurred

Growing Power Density: Culprit or Savior?
A brave new world of >15 kW per square foot

Force10 Customers: Data Center Power Considerations
1. Power is a prime mover in budgets
2. The network is about 10% of the power budget; the biggest relief comes from increasing density and utilization
3. From planning to build takes more than 12 months

Agenda
- Data Center Power Crunch
- Strategies for Reducing Power Across IT
- Power Efficiencies in Networking Today and Moving Forward
- Customer Case Studies
- Q&A

The Big Picture: The Watt Walkthrough
Total system efficiency comprises three main elements: the grid, the data centre, and the IT components. Each element has its own efficiency factor, and the factors multiply together: for every 100 watts of power generated, the CPU receives only 12 watts.
[Chart: efficiency (0-100%) across Grid, Data Centre, and IT Components]
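
A sketch of how the stage efficiencies chain together. Only the roughly 12%-end-to-end figure comes from the slide; the individual per-stage efficiencies below are illustrative assumptions chosen to reproduce it, not published numbers.

```python
# Hypothetical per-stage efficiencies; the ~12 W result matches the
# slide, but the split between stages is assumed for illustration.
stages = {
    "grid (generation + transmission)": 0.33,
    "data centre (UPS, distribution, cooling)": 0.50,
    "IT components (PSU, VRM, fans)": 0.73,
}

power = 100.0  # watts generated at the fuel source
for stage, efficiency in stages.items():
    power *= efficiency
    print(f"after {stage:<42} {power:5.1f} W")

print(f"delivered to CPU: {power:.0f} W of the original 100 W")
# after grid: 33.0 W; after data centre: 16.5 W; after IT: ~12 W
```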

A Series of Conversion Efficiencies
Carbon -> Grid -> Data Centre -> IT -> OS/Software
- Grid: fuel source, carbon conversion factor, renewables
- Data centre and IT components: server utilisation, operating system, software optimization

Data Center Best Practices
- The majority of efficiency improvement comes from rectifying inefficient cooling (60% of your wattage work set)
- Hot/cold row cooling
- Minimize leakage/blocking/bypasses

Tips
- Stay diligent about hot and cool aisle flow
- Be ruthless about air flow: watch out for cabling and other obstructions
- Blank your racks as well as your slots
- How cool? The mid-point of the recommended range is 74 degrees F and 50% humidity (1)
- Get your tiles right: perforated for cold, solid for hot
- Read your electric bill
(1) Source: American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) TC9.9

IT: Comprehensive Approach
Minimize power consumption and maximize power efficiency at every level within the infrastructure:
- CPU chips
- Power supplies
- Servers
- Storage devices
- Cabling
- Networking

CPU Chips
- Power-efficient architectures: a dual-core processor can deliver >60% higher performance than a single-core processor dissipating the same power (e.g., integrated memory controllers)
- Applications for multi-core chip architectures include cluster computing, transaction processing, and multitasking
- Processor power management with dynamic Clock Frequency and Voltage Scaling (CFVS)
- Future: transistors with lower leakage current, replacing the silicon dioxide gate dielectric with hafnium-based high-k material

Clock Frequency and Voltage Scaling
- Dynamically adjusts CPU performance (via clock rate and voltage) to match the workload
- Uses the operating system's power management utility via industry-standard Advanced Configuration and Power Interface (ACPI) calls
- 75% power savings at idle, and 40-70% power savings for utilization in the 20-80% range
[Charts: power (watts) and workload per watt vs. CPU utilization (0% idle to 100%), with and without CFVS]
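
On a Linux host, this kind of OS-driven scaling is typically exposed through the cpufreq subsystem. A minimal sketch, assuming the standard /sys/devices/system/cpu/.../cpufreq sysfs layout (paths and available governors vary by kernel and driver, and writing the governor requires root):

```python
# Inspect and set Linux's DVFS (cpufreq) interface via sysfs.
from pathlib import Path

CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read(name: str) -> str:
    return (CPUFREQ / name).read_text().strip()

def set_governor(governor: str) -> None:
    """Select how the OS scales frequency/voltage (needs root)."""
    (CPUFREQ / "scaling_governor").write_text(governor)

print("available governors:", read("scaling_available_governors"))
print("current governor:   ", read("scaling_governor"))
print("current frequency:  ", int(read("scaling_cur_freq")) // 1000, "MHz")

# Demand-driven governors (e.g. 'ondemand' or 'schedutil') raise the
# clock under load and drop it at idle -- the behavior the slide
# credits with the large idle-power savings.
# set_governor("ondemand")   # uncomment where this applies
```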

Servers
Blade servers:
- Chassis sharing can reduce power consumption by 20-50%
- Larger chassis are more efficient (>80%)
- Blade servers were inspired by modular switch/routers
- Even esoteric edge improvements scale up
Server virtualization:
- Applications are consolidated on a smaller number of servers, eliminating the power consumed by many low-utilization servers dedicated to single applications
- Potentially the #1 improvement (5-20x compression)
[Diagram: applications 1..N, each on a guest OS in virtual machines 1..N, running over a VM monitor/hypervisor on a single physical machine]
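
A back-of-the-envelope sketch of the consolidation math. The per-server wattages and the 10x ratio below are illustrative assumptions; only the 5-20x compression range comes from the slide.

```python
# Rough consolidation math with assumed numbers: many single-app
# servers idling at low utilization vs. fewer, busier virtualized hosts.
servers_before = 100
watts_per_server = 300          # assumed average draw per server

consolidation_ratio = 10        # mid-range of the 5-20x cited above
servers_after = servers_before // consolidation_ratio
watts_per_host = 450            # assumed: busier hosts draw more each

before = servers_before * watts_per_server
after = servers_after * watts_per_host
print(f"before: {before:,} W   after: {after:,} W   "
      f"saved: {100 * (1 - after / before):.0f}%")
# before: 30,000 W   after: 4,500 W   saved: 85%
```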

Storage
- Power consumption in storage devices is primarily from spindle motors and is largely independent of the capacity of the disk: the bigger the disk, the better
- Maximize TBytes/watt by moving to the highest-capacity disks, while keeping I/O characteristics compatible with the applications being served
- Unified Ethernet storage, virtualization technologies, and large-scale tiered storage maximize power efficiency by minimizing storage over-provisioning
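
Because spindle power is roughly constant per drive, TBytes/watt improves almost linearly with capacity. A quick sketch with assumed drive capacities and wattages:

```python
# Assumed drive profiles: spindle-dominated power is similar per drive,
# so fewer, larger disks win on TB/watt for the same total capacity.
drives = {
    "300 GB drive": {"tb": 0.3, "watts": 10.0},
    "1 TB drive":   {"tb": 1.0, "watts": 11.0},   # assumed figures
}

target_tb = 30  # capacity the application needs
for name, d in drives.items():
    count = round(target_tb / d["tb"])
    total_w = count * d["watts"]
    print(f"{count:3d} x {name}: {total_w:6.1f} W "
          f"({target_tb / total_w:.3f} TB/watt)")
# 100 x 300 GB drive: 1000.0 W (0.030 TB/watt)
#  30 x 1 TB drive:    330.0 W (0.091 TB/watt)
```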

Agenda
- Data Center Power Crunch
- Strategies for Reducing Power Across IT
- Power Efficiencies in Networking Today and Moving Forward
- Customer Case Studies
- Q&A

Switch/Routers
- Little difference in Gbps/watt among fixed-configuration and stackable switches
- Considerable differences among modular switch/routers, due to backplane technology and box-level densities
- Heavy copper traces reduce backplane resistance and wasted power consumption
- The Force10 E-Series uses a patented 4-layer, 4-ounce copper backplane with 10-20x less resistance and a power efficiency of 4.5 Gbps/watt (= backplane capacity / power consumption)

Unified Data Center Fabric
Ethernet can provide LAN connectivity, storage networking, and cluster interconnect across the data center. With a unified fabric, power is conserved:
- No additional sets of switches for specialized fabrics
- Higher utilization on existing switches
- Only one network adapter per server
- Efficient cable management
[Diagram: one fabric connecting LAN users, cluster IPC, NAS, servers, SAN/storage, and cluster disc arrays]

Virtualization-Ready Networking
- The single most effective way to manage and scale power is to increase server/storage/network utilization
- Applications draw on a shared pool of resources; no resources are dedicated to a single application, so utilization is higher
- Workloads of various applications peak at different times in the business cycle
- Shared resource model: do the same job with far fewer resources
[Diagram: SOA applications/services, traditional applications, and cluster applications running over an enterprise service bus, workload management, infrastructure virtualization, and cluster middleware, all drawing on a common pool of computing resources (switches, firewalls/load balancers, servers, storage) organized into PODs]

E1200 System Power
System configured with full switch fabric, route processors, power redundancy, and 672 line-rate GbE 1000BASE-T ports:

  Slot       Card                       Watts
  --         Chassis (common + SFMs)    1,055
  0-6        48-port 1000BASE-T (LR)    290 each
  RP0, RP1   RPM-EF                     125 each
  7-13       48-port 1000BASE-T (LR)    290 each

  Total power:        5,365 W
  Watts/Gbps:         8 (= 5,365 / 672)
  DC current @ 40 V:  134 A
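
The slide's totals can be re-derived from its own per-slot figures; a quick check:

```python
# Recompute the E1200 slide's totals from its per-slot numbers.
chassis_w = 1055            # chassis, common equipment + switch fabrics
line_cards = 14             # slots 0-13, 48-port 1000BASE-T (LR)
watts_per_card = 290
rpms = 2                    # RP0/RP1, RPM-EF
watts_per_rpm = 125

total_w = chassis_w + line_cards * watts_per_card + rpms * watts_per_rpm
ports = line_cards * 48

print(f"total power:       {total_w:,} W")        # 5,365 W
print(f"GbE ports:         {ports}")              # 672
print(f"watts per Gbps:    {total_w / ports:.1f}")# ~8.0
print(f"DC current @ 40 V: {total_w / 40:.0f} A") # ~134 A
```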

Maximizing Network Power in the Core and Data Center
- E-Series: resilient, scalable, high-density switches in the core
- Collapsed distribution/access tier (2-tier switching): elimination of numerous low-density switches
Power saved on a 270-node data center (source: Enterprise Rent-A-Car):
- 2 Force10 E1200s = 10,600 watts
- 5 Catalyst 6000s = 20,000 watts
[Diagram: E-Series in the core, behind firewall/LB and Layer 2/3 routers & switches, serving servers and NAS/SCSI]

Maximizing Network Power in the Wiring Closet
- C-Series: resilient, scalable, high-density wiring closet switches, with C-Series in the core
- Collapsed distribution/access tier (2-tier switching): eliminate numerous low-density switches
Switch comparison for a large campus:
- 3 C-Series = 9,400 watts
- 8 Catalyst 4500s = 22,000 watts
[Diagram: wiring closets and vertical risers feeding a collapsed core/distribution/access design]

Introducing the Force10 S2410
Industry-leading density and flexibility: highest density, lowest latency and price
- 24 line-rate 10 GbE ports in 1 RU; full-function switch; XFP or CX4 interfaces; fiber and CX4 versions
- Drives down 10 GbE port prices to spur adoption: list pricing starting at $24,000 (CX4)
- Reduces 10 GbE switching latency to InfiniBand levels: 300 nanoseconds
- Lowest power consumption of any switch on the planet: 480 Gbps of switching on 125 watts

Power Going Forward
- Becoming a key metric for product comparison:
  - Servers: application workload/watt (e.g., Mflops/watt)
  - Storage: GBytes/watt
  - Networking: Gbps/watt
- IEEE Energy Efficient Ethernet working group
- EPA considering an Energy Star rating for data center equipment, including switch/routers
- Force10 is a member of TheGreenGrid.org and can provide updated power calculators to model power and cooling in TCO calculations

Timing of Application Needs
[Chart: data rate (Mb/s, log scale, 100 to 1,000,000) vs. date (1995-2020), showing adoption curves for 1 GbE, 10 GbE, 40 GbE, and 100 GbE. Source: IEEE 802.3 HSSG]

Agenda
- Data Center Power Crunch
- Strategies for Reducing Power Across IT
- Power Efficiencies in Networking Today and Moving Forward
- Customer Case Studies
- Q&A

Enterprise Rent-A-Car: Remove Wasteful Interconnects
- Consolidated design: 270 line-rate nodes on one device, 4 x 10 GbE uplinks; OPEX power 3,760 W (12,812 BTU/hr)
- Previous design: 270 line-rate nodes (90 + 90 + 90) across five devices with 48 x 10 GbE interconnects, 4 x 10 GbE uplinks; OPEX power 19,920 W (67,960 BTU/hr)
CAPEX: 75% lower up-front cost, >$1 million savings; one device versus five; 28 fewer line cards
OPEX: 81% less power; 81% less cooling needed (air conditioning); 80% less rack space
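
The OPEX percentages follow directly from the two wattage figures, and the BTU/hr values are the 3.413 conversion from earlier; a quick check (the slide's BTU/hr figures appear to round slightly differently):

```python
# Check the slide's OPEX claims from its own wattage figures.
new_w, old_w = 3_760, 19_920
print(f"power saved: {100 * (1 - new_w / old_w):.0f}%")  # ~81%
print(f"heat, new:   {new_w * 3.413:,.0f} BTU/hr")       # ~12,833
print(f"heat, old:   {old_w * 3.413:,.0f} BTU/hr")       # ~67,987
# Cooling load scales with heat removed, hence the matching
# "81% less cooling" claim on the slide.
```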

MareNostrum, Barcelona Supercomputing Centre
Client requirements:
- Build the #1 supercomputing center in Europe, focused on computational, earth, and life sciences
- Location: Torre Girona chapel, 153 sqm, with 2,560 GbE nodes and 94.21 teraflops
- Non-blocking supercomputing; create a scalable, flexible environment
Solution:
- Raised floor to accommodate high-airflow requirements; cooling water storage tanks
- IBM Blue Gene and 1350 blade servers drove massive Gigabit densities
Benefits:
- High-density Ethernet (8 watts/GbE)
- Supports 21 kW/rack (400 W/sq ft) of cooling
- Flexibility for the future: a supercomputing performance upgrade is underway
- World's most beautiful supercomputing center

Yahoo! Case Study
Client requirements:
- Bandwidth doubling every year; expects 10 GbE server scale in 1-3 years
- 20 Gigabit bandwidth in metro transport
- Explicitly dual-vendor: interoperability a must
Solution:
- Running 80 km WDM optics
- POD design with 300+ GbE nodes; extreme Gigabit densities
Benefits:
- Power footprint of 2.5 kW per 300 nodes
- 1/3 the cooling budget of the previous switch, and over $2.5M in power & cooling savings in 3 years
- Substantial (4-8x) saving over SONET

Coming Soon

Q & A

Thank You
Debbie Montano
dmontano@force10networks.com