Enabling an agile Data Centre in a (Fr)agile market
Phil Dodsworth, Director, Data Centre Solutions
© 2008 Hewlett-Packard Development Company, L.P.
HP is involved in the Technologies of Interest to Financial Services Companies:
- Cloud: computing as a service
- Green computing: performance/watt
- Grid: performance, with job-execution SLAs & manageability
- x64 architectures: multicore; NUMA; packaging
- Programming multicore: parallelization, binding, tools
- Application accelerators: specialized speed
- Low-latency messaging: network; real time; applications
- Storage arrays: commodity cost; converged in grid; resilient
- File systems: scalable; distributed; resilient
- Server virtualisation: improved utilization
- Desktop virtualisation: centralized management, security
The Data Centre transformation need: driving trends
- Data Centre power density is up 10x in the last 10 years: 2.1 kW/rack (1992) to 14 kW+/rack in 2008
- Increasing processing power & Moore's Law will continue to challenge the laws of physics
- Energy costs will continue to increase: a server's 3-year energy cost is roughly equivalent to as much as 2x its acquisition cost in Europe (back-of-envelope check below)
- Government energy targets and taxation are set to increase via the Climate Change Bill & CCLs
- By 2010, more than half of Data Centres will have to relocate to new facilities or outsource some applications [1]
- Power failures and limits on power availability will interrupt Data Centre operations at more than 90 percent of all companies over the next five years [2]
[1] AFCOM's Data Centre Institute, AFCOM 2006. [2] EPA Report 2007. *Gartner 2007
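A back-of-envelope check on the 2x claim. Every input below is an assumption chosen for illustration (server draw, overhead multiplier, tariff, and purchase price are not figures from this deck):

```python
# Back-of-envelope check of the "3-year energy ~ 2x acquisition cost" claim.
# All inputs are illustrative assumptions, not figures from the deck.
server_draw_kw = 0.5          # average draw of a volume x86 server (assumed)
overhead_multiplier = 2.0     # facility power + cooling overhead (assumed; see PUE later)
tariff_eur_per_kwh = 0.15     # European energy tariff (assumed, 2008-era)
acquisition_eur = 2_000       # typical server purchase price (assumed)

hours_3y = 24 * 365 * 3
energy_cost = server_draw_kw * overhead_multiplier * hours_3y * tariff_eur_per_kwh
print(f"3-year energy cost: EUR {energy_cost:,.0f} "
      f"(~{energy_cost / acquisition_eur:.1f}x acquisition)")
# -> EUR 3,942 (~2.0x acquisition) under these assumptions
```

Under these assumed inputs the 3-year bill lands at roughly twice the purchase price, consistent with the slide's claim.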
How energy inefficiency can cost your business
Business impact related to power and cooling issues:
- Server or system downtime
- Increased operational costs
- Data Centre outage
- Inability to add capacity
- Failure to support business growth
- Loss of revenue
- Lowered customer satisfaction
Delivering Energy Efficient Solutions for the IT Power & Cooling Chain
Optimising from chip to chiller: up to 30% power savings
- Power-optimized ProLiant servers: 18% less power
- Insight Power Manager & iLO 2: 10% power reduction with Power Regulator
- Power Distribution Rack & MCS: 15-20% savings on power & cooling
- BladeSystem & Thermal Logic: 25% cost savings to power & cool
- Dynamic Smart Cooling: up to 45% cooling cost savings with thermal mapping
- HP Services thermal mapping: over 10% cooling cost reduction
- Storage thin provisioning / Dynamic Capacity Management: saves up to 45%
- Virtualization/consolidation: up to 40% reduction in power cost for Data Centres
- SFF drives: 9 watts for a 2.5" drive vs 18 watts for 3.5"
- Power supplies: 90% efficient
- Low-power processors: up to half the power consumption
Energy-saving solutions from the server chip to the Data Centre air chillers and everything in between (see the compounding sketch below)
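These percentages apply at different stages of the chain, so they do not simply add up to a headline figure. A minimal sketch, assuming a subset of the savings are independent and therefore compound multiplicatively (an assumption for illustration, not a statement of how HP derives its 30% figure):

```python
# Illustrative only: fractional savings at different stages of the
# power/cooling chain compound multiplicatively rather than adding up.
stage_savings = [0.18, 0.10, 0.25]   # e.g. servers, Power Regulator, BladeSystem (assumed subset)

remaining = 1.0
for s in stage_savings:
    remaining *= 1.0 - s             # each stage scales what is left

combined = 1.0 - remaining
print(f"Combined saving: {combined:.1%}")
# ~44.6%: less than the naive 18 + 10 + 25 = 53% sum
```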
Industry needs to benchmark Data Centres
Efficient components are necessary, but not sufficient. HP introduces Power Usage Effectiveness (PUE) for the Data Centre: the ratio of building load to IT load as a measure of efficiency (worked example below).
PUE = Building Load / IT Load
- Building load: demand from the grid, covering power (switchgear, UPS, battery backup, and so on) and cooling (chillers, CRACs, and so on)
- IT load: demand from servers, storage, telco equipment, and so on
Industry numbers suggest:
- PUE 1.6: ideal (~0% of Data Centres)
- PUE 2.0: target (~5%)
- PUE 2.4: average (~10%)
- PUE 3.0+: poor (~85%)
See also ASHRAE, Uptime Institute, EPA. Source: Belady, C., Malone, C., "Data Center Power Projection to 2014", ITHERM 2006, San Diego, CA (June 2006)
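A minimal worked example of the ratio, using hypothetical loads (all figures below are assumed for illustration):

```python
# Worked PUE example with hypothetical loads (all figures assumed).
it_load_kw = 1000.0          # servers, storage, telco equipment
power_chain_kw = 250.0       # switchgear, UPS, battery losses (assumed)
cooling_kw = 750.0           # chillers, CRACs (assumed)

building_load_kw = it_load_kw + power_chain_kw + cooling_kw
pue = building_load_kw / it_load_kw
print(f"PUE = {pue:.1f}")    # 2.0 -> the 'target' tier above
```

Of the 2,000 kW drawn from the grid in this example, only half reaches IT equipment; the rest is power-chain and cooling overhead, which is exactly what the ratio exposes.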
How Can You Move to a Lower PUE?
- Step 1: Consolidate & virtualize. Remove unused applications and servers; requires business buy-in (illustrative sketch below)
- Step 2: Turn on energy-saving technologies: Virtual Server Environment, HP Power Regulator (incorporated in IPM), Dynamic Power Saver (blade technology)
- Step 3: Follow best practices: guidelines from ASHRAE TC 9.9 & the Uptime Institute; HP has been promoting these with our clients
- Step 4: Optimize the Data Centre using Computational Fluid Dynamics (CFD): HP Data Centre Thermal Assessment & planning services
- Step 5: Adopt closely coupled cooling, e.g. HP Modular Cooling System, HP Performance Optimized Data Centre (POD)
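An illustrative sketch of what Step 1 can recover at the meter. All inputs are assumptions, not HP data, and it assumes power/cooling overhead scales down with IT load:

```python
# Illustrative sketch of Step 1: energy recovered by consolidating idle servers.
# All inputs are assumptions, not HP data.
servers = 1_000
avg_draw_kw = 0.4            # average per-server draw (assumed)
removable_fraction = 0.30    # share that can be consolidated away (assumed)
pue = 2.4                    # the 'average' tier from the previous slide

it_saving_kw = servers * removable_fraction * avg_draw_kw
# Assumes overhead scales with IT load, so each IT watt removed
# saves PUE watts at the meter.
facility_saving_kw = it_saving_kw * pue
annual_kwh = facility_saving_kw * 24 * 365
print(f"~{facility_saving_kw:.0f} kW at the meter, ~{annual_kwh:,.0f} kWh/year")
# -> ~288 kW, ~2,522,880 kWh/year under these assumptions
```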
HP MCS (Modular Cooling System)
Ideal solution for high-density racks and resolving Data Centre hot spots
- Certified and integrated by HP
- Modular & redundant design
- Up to 35 kW of cooling capacity in a single rack, or 17.5 kW each side (MCS II)
- CTO capable; supports up to 900 kg of IT equipment
- Uniform airflow across the front of the servers
- Safety features to prevent water contact with servers
- Adjustable server air temperature set point
- Does not add significant heat load to the Data Centre
- Level 2 integration with HP SIM
HP POD
- Standard 47U 10000-series racks (optional higher-density model available)
- Hot aisle with rear access through doors in the container
- High-efficiency heat exchangers (HEX) from HP MCS
- High-efficiency, variable-speed blowers from HP MCS
- Separate utility module segregates IT/UPS security access and environmentals
- Facilities management on the exterior of the cold aisle
- 36" cold aisle can run at >90°F
The HP IT Mission
- Provide good information to enable better business decisions
- Significantly reduce the cost of IT while delivering more to the business
- Lower risk to the enterprise with better control of the infrastructure
- Be a showcase for enterprise customers
HP IT 2005: large scale and scattered
- <50% of resources' time dedicated to innovation
- IT at 4+% of revenue
- 750+ data marts
- 100+ HP IT sites in 53 countries
- 30% of IT managed by IT
- 6,000 applications
- Under-managed network
- 85+ data centres in 29 countries
- 1,240+ active IT projects
HP Data Centre Locations
Consolidating >85 global Data Centres to 6 in 3 U.S. geographical zones (Austin, Houston, Atlanta), chosen for:
- Proximity to major fiber optic backbones
- Access to multiple power grids
- Costs
Total white space: 410,000 sq. ft.
Within each zone:
- 2 sites within a 10-25 mile radius of each other
- Each site designed for high availability, disaster recovery and business continuity
[Map: Zone A, Austin (Sites 1-2); Zone B, Houston (Sites 3-4); Zone C, Atlanta (Sites 5-6)]
HP IT Production Data Centres: US Locations
Distances between zones: Austin-Houston 150 miles; Houston-Atlanta 717 miles; Austin-Atlanta 829 miles
- Austin, Site 1: completed 11/06; 125 KSF
- Austin, Site 2 (2600 Pinemeadow): completed 06/07; greenfield, 50 KSF
- Houston, Site 3: completed 06/06; 1,700-server addition
- Houston, Site 4: completed 05/07; greenfield, 100 KSF
- Atlanta, Site 5: completed 01/07; 50 KSF raised floor
- Atlanta, Site 6: completed 07/07; greenfield, 50 KSF
KSF = 1,000 square feet
Design of Mission Critical Facilities
There's a lot to consider:
- Fault tolerance
- Modular approach to provide flexibility in size, density and reliability
- Changing operating environment
- Balancing reliability, capex and opex
- Maximising return on investment
- Automated operation with minimal human intervention
- Load densities
- Operational costs
- Impact of maintenance on resilience
- External threats, e.g. natural disasters, terrorism
- Protection of existing service to clients
- Audit provision
EYP MCF: profile
EYP MCF focuses on the strategic planning, design, and operations of critical facilities:
- Designed over 32 million sq. ft. of Data Centre space
- Planned and designed 50+ greenfield Data Centres
- Designed fifteen 15 MW+ Data Centres, including five 35 MW+ Data Centres
- 200,000 tons of critical cooling design
- 23 million sq. ft. of Data Centre risk, reliability & feasibility assessments
- Commissioned 15 million sq. ft. of critical facilities
- Ranked one of the Top Firms Worldwide Specialising in Data Centre & Telecommunications Facilities (Engineering News-Record)
- Consultants to 14 of the top 15 largest US financial institutions, 4 of the top 5 search providers, and 5 of the top 5 telecoms, networks and communications companies
Data Centre Design: flexibility is the key criterion
Modular design / standards:
- Zone, Site, Module, Cell, Pod concept
- Bay & chase design contains white space
- Pods can be configured to fit site geometry and topology
- Cells will be configured to fit expansion, power and physical requirements
- Ability to deploy quickly & reconfigure
Cost drivers / advantages:
- 2x available power per square foot, from an average of 60W to 120W+ (illustrative arithmetic below)
- Physical area
- Efficient cooling capacity
- 60% savings on energy consumption for power & cooling
- Dynamic Smart Cooling (DSC) turns cooling capacity into a variable cost
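To put the density figure in context, a short arithmetic sketch (the 50 KSF module size is borrowed from the site list; treating it as the unit of comparison is an assumption):

```python
# Illustrative arithmetic for the 2x power-density claim (module size assumed).
area_sqft = 50_000                       # one 50 KSF module, as in the site list
for density_w_per_sqft in (60, 120):
    mw = area_sqft * density_w_per_sqft / 1e6
    print(f"{density_w_per_sqft} W/sq.ft -> {mw:.1f} MW of deployable IT load")
# 60 W/sq.ft -> 3.0 MW; 120 W/sq.ft -> 6.0 MW: the same floor hosts twice the load
```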
HP IT mission: streamlined and focused
- IT as a % of revenue cut in half
- <30 HP IT core collaboration sites worldwide
- ~1,500 applications
- 80% of resources' time dedicated to innovation
- 1 enterprise data warehouse (EDW)
- 100% of IT managed by IT
- 6 NGDCs (next-generation Data Centres) in 3 zones
- Optimized, cost-effective & secure network
- ~500 active business projects at any given time
Technology for better business outcomes