
CAPITOLINE Limited Liability Partnership

Regional hosted data centre research project for the University of East Anglia

A Report into the technical options and costs of Green Data Centre construction

Prepared by B J Elliott BSc, MBA, C.Eng, MIET, MCIBSE
belliott@capitoline.co.uk
Capitoline LLP 2010

Capitoline LLP, Capitoline House, 6 High Grove, Welwyn Garden City, Herts AL8 7DW, England
www.capitoline.co.uk
0800 0148014

Data centre report UEA 09 001, Issue 005, 16-3-10, BJE

Amendment record
1 First issue 26-1-10 BJE
2 Second issue 12-2-10 BJE
3 Third issue 22-2-10 BJE
4 Fourth issue 23-2-10 BJE
5 Fifth issue 16-3-10 BJE

Management Summary

The UEA currently runs two computer rooms totalling 110 racks over 418 m². This could be condensed into 40 racks taking about 140 m² if required. The UEA currently expends about 280 kW on the two computer rooms.

There is a market in East Anglia and the UK for high quality, low carbon data centre space of Tier 3 reliability. The current average monthly rental is £1,840 per 32 amp rack per month. The optimal size for a new data centre build is between 60 and 140 32 amp racks, costing between £2 million and £4 million to build. A load factor of between 70% and 80% is required to break even and cover build and running costs; load factors above that would be profitable.

There are many competing ways to define the green credentials of a data centre. The most popular is the PUE/DCiE model from the Green Grid and the EU. Many technologies are available to reduce the carbon footprint, but most have a high initial capital cost and careful economic analysis must be employed. The UK climate lends itself to more efficient air-conditioning models such as air economisers.

The UEA could build a data centre on a greenfield site near the campus or use existing facilities on the campus. Computer Room 2 is of adequate size to provide 60 colocation racks, but additional support space around it would have to be dedicated to it. The UEA would have a significant marketing advantage if it could use the campus gasification power plant to contribute power and cooling to a new data centre.

Contents
Management Summary
1 Introduction
2 What is a data centre?
3 The Green Agenda
4 Resilience models for data centres
5 The data centre market
6 Data Centre design and Build costs
7 What makes a data centre Green?
8 The carbon footprint of a data centre
Appendix 1
Appendix 2: Infinity Dark Green Energy Data Centre
Appendix 3: Optimal sized data centre
Appendix 4
Appendix 5: Existing computer room facilities at UEA

This document is copyright of Capitoline LLP. Every effort has been made to ensure an accurate representation of the facts. Capitoline LLP seeks to meet the highest standard of quality information and every attempt has been made to present up-to-date and accurate information. However, Capitoline cannot be held responsible for the misuse or misinterpretation of any information and offers no warranty as to its accuracy or completeness. Capitoline accepts no liability for any loss, damage or inconvenience caused as a result of reliance on this information.

1 Introduction

Capitoline has been commissioned to produce a report considering the feasibility of developing regional data centres with the objective of saving energy resources. The report was commissioned by the Sustainable ICT Services Provision (SISP) project, University of East Anglia (UEA). Additional information has been supplied by Dr Simon Gerrard of UEA LCIC.

The report will consider:
- What a data centre is and what it does: technical definition and examples
- The green agenda and green metrics
- Resilience models for data centres, e.g. Tier 1-4, N, N+1 and 2N models
- Getting planning permission for a data centre, meeting Part L and M (and the rest) of the Building Regulations
- The data centre market: colocation, carrier neutral hotels, build-your-own
- Establishing what services could be offered, e.g. renting by the square metre, by the rack, by the kilowatt, providing telecommunications and other managed services etc.
- Calculating the size of a data centre from an IT requirement
- A 20, 40 and 60 rack model. The 20 and 40 rack model proposals will be based on the assumption that the University's existing facilities will be used and renovated. The 60 rack model will be based on a greenfield model in the East Anglia area
- For the 20, 40 and 60 rack models we will calculate the spaces required, cooling, power and other facilities requirements
- Using standard industry data and our own in-company research we will estimate the build and running costs of the models and the total cost of ownership
- Our designs will be based on the latest low energy, high efficiency solutions but will also identify where lower energy running costs also mean a higher initial capital cost
- Commercial competitors in the East Anglia region and the going rates for colocation and managed services will be investigated. From this data and the TCO/build cost estimates described above we will calculate probable rates of return and ROI models against varying levels of occupancy.

2 What is a data centre?

If we take the definition from the TIA 942 Standard [1]: "A data centre is a building or portion of a building whose primary function is to house a computer room and its support areas."

We can take from this that the computer room is still the engine room of the operation: it is, after all, the place where the computers (servers), storage and telecommunications equipment are located. The data centre is the infrastructure that makes sure the computer room can fulfil all its functions, with the correct power and environmental controls, providing connectivity within the data centre and to the outside world, and providing all this within the secure borders of the data centre. The words "computer room" and "data centre" are often taken to be synonymous, but we can see from the above definitions that this is not correct. Apart from the computer room or rooms, the data centre must provide:

- A control room where all the control and monitoring functions of the data centre are centred
- General office and meeting space
- Secure pedestrian and goods entrances
- Unloading, unpacking and equipment build areas
- Space for mechanical and electrical (M&E) plant
- Telecommunications entrance facilities to terminate external telecommunications cables in an organised manner
- General staff welfare facilities
- Storage facilities for equipment and maybe tape backup
- External space for:
  o Generators
  o Fuel storage
  o Air conditioning plant
  o Parking
  o Loading and unloading heavy equipment

Appendix 1 gives more detail of the data centre functional space requirements.

UEA current facilities. UEA currently has two computer rooms of approximately 170 m² (CR1) and 248 m² (CR2). CR2 has an adjacent control room, but it is not currently in use. CR1 has an adjacent control room area. Both areas have secure entrances, but the support services, such as power and cooling, come from areas that also serve the rest of the campus. Thus neither of the computer rooms could be defined as a data centre, in that they do not have all their own dedicated support facilities within one secure border. CR2 is an ideal space for a third party data centre, but its access, security and monitoring would have to be reviewed, and a dedicated unloading, store and build area would have to be found before it could be used as a colocation centre.

3 The Green Agenda

It has become apparent over the last few years that information technology in general, and data centres in particular, are vast consumers of electricity (and other resources such as water) and hence are major indirect contributors in their own right to carbon dioxide output into the atmosphere. Government interest in this area started with the U.S. EPA ENERGY STAR Report to Congress on Server and Data Center Energy Efficiency, Public Law 109-431 [2], in 2007. This survey showed that electricity consumption for IT and data centres was taking 1.5% of all US electrical production by 2005 and was predicted to take 2.5% of total U.S. electricity use by 2012. This equates to $7.4 billion in electricity costs and around 68 million tonnes of CO₂. Regardless of CO₂ output, electricity demands of this magnitude make it a strategic issue. If, according to the IPCC [3] (Intergovernmental Panel on Climate Change), aviation accounts for around 2% of global emissions, then the data centre industry must now be about to overtake the aviation industry. Figure 1 shows the predicted growth in power consumption for American data centres under five scenarios, denoting a do-nothing policy, small improvements, best practice and technical state-of-the-art approaches.

Figure 1: Historic and projected electricity use of U.S. data centres [2]

The European Union became involved in 2008 via the EC Directorate-General JRC (Joint Research Centre) Institute for Energy. According to them [4]: "Electricity consumed in data centres, including enterprise servers, ICT equipment, cooling equipment and power equipment, is expected to contribute substantially to the electricity consumed in the European Union (EU) commercial sector in the near future. Western European electricity consumption of 56 TWh per year can be estimated for the year 2007 and is projected to increase to 104 TWh per year by 2020. The projected energy consumption rise poses a problem for EU energy and environmental policies. It is important that the energy efficiency of data centres is maximised to ensure the carbon emissions and other impacts, such as strain on infrastructure, associated with increases in energy consumption are mitigated."

Google is one of the world's largest users and owners of data centres, and in 2009 they came under criticism in the media for their apparently profligate use of energy. This led them to publish the actual average power consumed, and CO₂ generated, by the average Google search. They arrived at this by simply dividing their total global energy bill by the annual number of searches undertaken. The result was that the average Google search accounts for 0.0003 kWh, or 0.2 grams of CO₂ (published on www.google.com).

According to British Land, in 2005 the average UK office block took around 140 kWh/m² per annum. From Capitoline's internal data we estimate the average data centre takes 7,500 kWh/m² per annum, or 52 times the energy density of the average office. With electricity prices at around 10 pence per kilowatt-hour, this means every square metre is costing around £750 per annum to run. If we use the British government's own electricity-to-CO₂ calculator (carbon dioxide factor for grid displaced energy [5]) we can convert this figure to say that every 600 mm x 600 mm average computer room floor tile accounts for 1.2 tonnes of CO₂ emissions per year.
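The tile arithmetic above can be reproduced in a few lines. In this sketch the grid carbon factor is an assumption (about 0.43 kgCO₂/kWh, the approximate UK grid-displaced factor of the period and the value implied by the report's own 1.2 tonne result); the other inputs are the figures quoted above.

    # Energy density and CO2 arithmetic for the average data centre
    ENERGY_DENSITY_DC = 7500.0      # kWh/m2 p.a. (Capitoline estimate)
    ENERGY_DENSITY_OFFICE = 140.0   # kWh/m2 p.a. (British Land, 2005)
    PRICE_PER_KWH = 0.10            # GBP per kWh
    GRID_FACTOR = 0.43              # kgCO2 per kWh (assumed)
    TILE_AREA = 0.6 * 0.6           # m2: one 600 mm x 600 mm floor tile

    ratio = ENERGY_DENSITY_DC / ENERGY_DENSITY_OFFICE  # ~54x (report rounds to 52)
    cost_per_m2 = ENERGY_DENSITY_DC * PRICE_PER_KWH    # ~GBP 750 per m2 p.a.
    tile_co2_t = ENERGY_DENSITY_DC * TILE_AREA * GRID_FACTOR / 1000.0

    print(f"Energy density ratio: {ratio:.0f}x")
    print(f"Running cost: GBP {cost_per_m2:.0f}/m2 p.a.")
    print(f"CO2 per floor tile: {tile_co2_t:.1f} tonnes p.a.")  # ~1.2 t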

The need to reduce energy consumption and carbon dioxide output has started to find its way into national laws. In Europe we have:
- Energy Performance of Buildings Directive 2002/91/EC
- Energy Services Directive 2006/32/EC

These in turn find their way into UK law by means of:
- Sustainable and Secure Buildings Act 2004
- The Energy Performance of Buildings Regulations SI 2007/991
- Climate Change and Sustainable Energy Act 2006
- The Building Regulations 2000
  o Part L - Buildings other than dwellings: Approved Document L2A: Conservation of fuel and power (New buildings other than dwellings) (2006 edition); Approved Document L2B: Conservation of fuel and power (Existing buildings other than dwellings) (2006 edition)

Energy consumption and efficiency in the UK has been under the auspices of DEFRA (Department for the Environment, Food and Rural Affairs), but with input now from the Department of Energy and Climate Change (DECC) and the Department for Business, Enterprise and Regulatory Reform (BERR). Lord Hunt, Minister for Sustainable Development and Energy Innovation, welcomed the launch of the EU Code of Conduct for Data Centres and encouraged data centre operators to adopt the Code, saying: "If we are to tackle dangerous climate change, we need to reduce emissions, and the decisions businesses make play a key role in meeting this challenge. By signing up to this new code of conduct companies can save energy and save money too, which goes to show that what's good for the environment is good for business. The UK is the first country in the world to set legally binding targets for reducing greenhouse gas emissions. In order to achieve the ambitious target of an 80 per cent reduction in greenhouse gases by 2050, everyone must play a part. Data centres are responsible for almost three per cent of electricity use in the UK and this is expected to double by 2020. Within the next 12 months Defra will be seeking compliance by the main data centres used for Defra systems."

Signatories to the EU Code will be expected to implement the Code of Conduct's energy efficiency best practice, meet minimum procurement standards, and annually report energy consumption. This might mean that companies decommission old servers, reduce the amount of air conditioning they use, or maximise the use of a server by running multiple applications. The Government's work through its Market Transformation Programme (MTP) was instrumental in the development of the Code, which should help save 4.7 million tonnes of CO₂ over the next six years. This is equivalent to taking more than a million cars off the road. Over the next six years a successful implementation of the Code would allow UK businesses to save almost £700 million in electricity costs.

DEFRA has already stated that whereas data centres were previously classified as industrial processes and exempt from the EPBD (EU Energy Performance of Buildings Directive), recent changes now require Member States to set energy performance standards for data centres. The articles most relevant to a data centre are:

Article 4 - Every Member State must set standards for energy performance in buildings. Exceptions are made, but this is unlikely to affect data centres.

Article 5 - New large buildings over 1,000 m² must have an evaluation of alternative options for design and fit-out (e.g. CHP).

Article 6 - Existing buildings over 1,000 m² must undergo energy performance upgrades alongside any major renovation.

Articles 4-6 are transposed through the Building Regulations 2006 in England and Wales (Part L). Similarly, these are transposed in Scotland (Section 6) and Northern Ireland (Part F) through building standards. The Code of Conduct for Data Centres is the only planned policy that aims to address these areas. The Code deals with the requirements for energy monitoring and subsequent calculation of energy efficiency metrics, in particular the ratio of power used by ICT equipment compared with the overall power consumption of the whole data centre. At present this ratio is only averaging about 45%; the adoption of best practices should allow it to rise to over 75%. The associated Best Practices document lists the operational areas of a data centre, such as cooling, power supplies etc., and lists a range of improvements that can be applied at any time or at times of new build or major refurbishment. The value of such improvements, when comparing capital outlay against energy efficiency improvement, is rated from 1 to 5, with 5 being the best score.

The conversion of a fossil fuel into useful data processing activity is seen as horrendously inefficient. IBM has stated that only 3% of the calorific value of a hydrocarbon fuel does any useful work by the time it has reached a server. HP [6] reduces this figure to just 0.9% when defined as useful business processes. All this activity has led, across 2008 and 2009, to differing and competing organisations producing different ways of judging the energy efficiency of data centres. The current status is summarised below.

The Green Grid
- Metric: Data Centre Infrastructure Efficiency, DCiE = IT power / total facilities power, and its reciprocal, Power Usage Effectiveness, PUE = 1/DCiE
- In development: Data Centre Productivity, DCP = useful work done by the data centre / resources consumed by the data centre

US Department of Energy
- 12 different assessments and metrics are listed, including DCiE, PUE and Site Energy Use Intensity

US Green Building Council
- Metric: the Leadership in Energy and Environmental Design (LEED) Green Building Rating System, with an Environmental Performance Criteria Guide for New Data Centers (DRAFT) based on LEED NC 2.2, measured in kBtu per unit of useful work. Data centres present both a challenge and an opportunity in the development and implementation of sustainable design, construction and operation practices: issues such as mission critical 24/7 operations and energy and water use intensity are not addressed adequately in the current USGBC LEED NC 2.2, so the CEC has funded Lawrence Berkeley National Laboratory (LBNL) to customise a data centre specific Environmental Performance Criteria

Energy Star
- Energy Star, from the US Environmental Protection Agency, is developing a Data Centre Infrastructure rating system on a 1-100 scale, with Energy Usage Effectiveness (EUE) = total energy / UPS energy

BREEAM
- BREEAM (BRE Environmental Assessment Method) is the leading and most widely used environmental assessment method for buildings. It sets the standard for best practice in sustainable design and has become the de facto measure used to describe a building's environmental performance. The BREEAM family of environmental rating schemes has been extended to cover data centres

The Uptime Institute
- Metric: Data Center Energy Efficiency and Productivity (DC-EEP) Index, DC-EEP = (IT-PEW) x (SI-EER), where IT-PEW = IT productivity per embedded watt and SI-EER = Site Infrastructure Energy Efficiency (the same as PUE). TUI say the average SI-EER is 2.5, or 40%
- Also: Corporate Average Data Efficiency (CADE) = facility efficiency x IT asset efficiency
- In development: future energy efficiency metrics for servers, midrange/mainframe, storage, network etc.

Other metrics
- Energy Consumption Ratio, ECR = Ef/Tf (expressed in watts per Gbps), where Tf = maximum throughput (Gbps) achieved in the measurement and Ef = energy consumption (watts) measured during the running test
- TPC Benchmark C is an online transaction processing (OLTP) benchmark

At the present time the only widely accepted metric is DCiE (and its reciprocal, PUE) from the Green Grid. DCiE also makes an appearance in the EU Code of Conduct. Its advantage is that it is simple, being the total power consumed by the ICT equipment divided by the entire site power. So if a site was consuming 100 kW in total, and 50 kW of that total was absorbed by the ICT equipment, then the DCiE would be 50% and the PUE would be 2. The disadvantage is that it says nothing about useful work done by the ICT equipment: all the servers may be idle, but the DCiE or PUE would not identify that. That is why many of the metrics mentioned above are investigating more meaningful measures such as useful work. However, defining that, let alone measuring it, would still appear to be some way off. The Environmental Protection Agency believes that the historical average is about 2 for PUE, or 50% DCiE. The survey below, done by Lawrence Berkeley National Laboratory (LBNL) [9], gives a PUE average of 1.83. The graph also demonstrates the variability between sites.

Figure 2: PUE values
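Since PUE and DCiE recur throughout the rest of this report, a minimal sketch of the calculation may be useful. It uses the 100 kW / 50 kW worked example above, and then the UEA figures reported in the next section (153 kW of IT load out of an estimated 280 kW total).

    def pue(total_site_kw: float, it_kw: float) -> float:
        """Power Usage Effectiveness: total site power over ICT power."""
        return total_site_kw / it_kw

    def dcie(total_site_kw: float, it_kw: float) -> float:
        """Data Centre Infrastructure Efficiency: reciprocal of PUE, as a percentage."""
        return 100.0 * it_kw / total_site_kw

    # Worked example from the text: 100 kW site load, of which 50 kW is ICT
    assert pue(100, 50) == 2.0
    assert dcie(100, 50) == 50.0

    # UEA's measured position: 153 kW IT load, 280 kW estimated total
    print(f"UEA PUE  = {pue(280, 153):.2f}")    # ~1.83 (the report quotes 1.82)
    print(f"UEA DCiE = {dcie(280, 153):.0f}%")  # ~55%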

UEA current position

We measured the IT load across the two computer rooms as 153 kW and we estimate the total power load to be 280 kW. This would give the following results:
- PUE: 1.82
- DCiE: 55%
- CO₂ rating (using grid electricity): 2,746 kgCO₂/m² p.a. (5,868 kWh/m² p.a.)

We can see from the LBNL survey above that the industry average is a PUE of 1.83, so that makes the University of East Anglia average, and between the EPA's "Current Operations" and "Improving Trends" classifications.

Energy Star is developing an improved metric* called Energy Usage Effectiveness (EUE) = total energy / UPS energy:
- EUE is based on energy, not power
- Total energy includes all fuels (electricity, natural gas, diesel, etc.)
- EUE is based on source energy, not site energy
- Source energy is the total amount of raw fuel required to operate the building
- This results in equitable comparisons for buildings with different fuel types

*ENERGY STAR Data Center Infrastructure Rating Development Update Web Conference, September 29, 2009

From Energy Star's preliminary research the average EUE is 1.91. It should be noted that metrics such as PUE and EUE are generally taken as averages over a long period of time, e.g. a year. One might expect that in temperate climates the PUE would rise in summer as more energy was taken by air conditioning equipment. This has been suggested by Google [7], but Energy Star found no such effect [8].

The data suggests that any new data centre should achieve a PUE of better than 1.8. Google state they achieve a PUE of better than 1.2 in their new data centres [7], but the containerised technique they use would not be regarded as conventional technology at the moment. Their methods are best suited to one customer owning thousands of identical servers, operating in the same location and all doing the same job. This level of PUE would be difficult to achieve in a conventional multi-client, multi-purpose colocation facility.

We may summarise the driving forces behind the need to react to the green agenda under three headings:

Economic forces
- Designs that use less electricity will have lower operating costs. Most energy saving devices have a higher capital cost compared to conventional techniques, so any savings must be included within a Return on Investment (RoI) model and Total Cost of Ownership (TCO)
- Governments will impose various kinds of taxation or incentives related to the use of energy. Examples of this are:
  o Enhanced Capital Allowances for energy saving items
  o The Carbon Reduction Commitment starting in April 2010. This applies to any organisation with an annual electricity bill of 6,000 MWh p.a. Although this is likely to be larger than any UEA data

centre plans, the proposed data centre will be included if it is deemed that the UEA and the data centre are part of the same legal entity
  o Grants and interest-free loans from organisations such as the Carbon Trust for energy saving applications

Legislation
- The Carbon Reduction Commitment (also known as the CRC Energy Efficiency Scheme) applies to any organisation with an annual electricity bill of 6,000 MWh p.a.
- The Building Regulations, and Part L in particular, lay down requirements for energy saving measures
- The Energy Performance of Buildings Directive requires Energy Performance Certificates (EPCs) and inspections of HVAC equipment
- Climate Change Levy

Marketing
- Data centres with green credentials will be more attractive to both public and private sector customers, and we can expect various green certifications to be used extensively in any future data centre/colocation marketing campaigns.

4 Resilience models for data centres

An essential part of the concept, design and implementation of a data centre is to understand what level of redundancy and resilience is to be built in. The common terminology is to describe systems under the headings of N, N+1, 2N and various associated combinations:
- N means enough items to fulfil a function, and no more
- N+1 means we have one more item than we need, implying that one item can fail and the system will still work
- 2N means two complete autonomous systems
- 2(N+1) means two autonomous systems, each one in itself containing one redundant unit, to give a multiple fault tolerant system

The terminology is best demonstrated in the use of Uninterruptible Power Supplies (UPS), which can illustrate the philosophy in a block diagram format.

Figure 3: Arrangement of power supplies to illustrate N, N+1, 2N etc.
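As an illustration of how these schemes translate into installed plant, the sketch below counts UPS modules for each scheme. The 100 kW module rating and the 60-rack, 7 kW load are hypothetical values chosen for the example.

    import math

    def ups_modules(load_kw: float, module_kw: float, scheme: str) -> int:
        """Number of UPS modules needed to carry load_kw under a redundancy scheme."""
        n = math.ceil(load_kw / module_kw)  # N: just enough modules, no more
        if scheme == "N":
            return n
        if scheme == "N+1":
            return n + 1        # one module can fail and the system still works
        if scheme == "2N":
            return 2 * n        # two complete autonomous systems
        if scheme == "2(N+1)":
            return 2 * (n + 1)  # two systems, each containing one redundant unit
        raise ValueError(f"unknown scheme: {scheme}")

    load = 60 * 7.0  # hypothetical example: 60 racks at 7 kW = 420 kW of IT load
    for scheme in ("N", "N+1", "2N", "2(N+1)"):
        print(f"{scheme:6s}: {ups_modules(load, 100.0, scheme)} modules")
    # N: 5, N+1: 6, 2N: 10, 2(N+1): 12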

Several more detailed models exist around the world, such as the BITKOM classification [10] from Germany, but by far the most popular model is that created by The Uptime Institute (TUI) in the USA. The TUI started as an independent think tank considering how to put reliability models for IT systems into a practical descriptive form, and they created the Tier model. Tier 1 has the lowest reliability and aligns with an N construction; Tiers 2 and 3 align with an N+1 model; and the highest level is Tier 4, which aligns with a 2N model. The TUI has since been bought by an American commercial consultancy company called The 451 Group. Much of the Uptime Institute's philosophy has been adopted in the TIA 942 data centre design standard, where tiering levels are detailed for building construction, power supplies, air conditioning, cabling, security and all other functional requirements of a data centre. The Tier levels are summarised below.

Figure 4: Summary of Tiers [11]

UEA current position

We have not analysed the UEA design in detail, but we believe the setup is basically aiming for an N+1 operation. This would generally align the University's computer room setup with a Tier 2 operation.

Figure 5: Tier ratings in more detail [11]

There is of course a cost for increasing levels of resilience, both in terms of capital costs and higher running overheads. The extra overheads come from items of electrical equipment that are lightly loaded: with so many redundant units in parallel, none is running at its most efficient level. Capitoline's figures show that there is about a three-to-one ratio in build costs from Tier 1 to Tier 4, with typical fit-out costs ranging from about £3.5k/m² to £10.5k/m². These costs apply to the computer room floor space and exclude all core and shell costs, running costs and the cost of the IT equipment itself. They include all fit-out costs, including power supplies, cabling and air conditioning. The basic building costs do not differ much from one Tier to another; indeed, data centre buildings are usually quite simple industrial units. Statistics from the EPA [8] show a decline in EUE (Energy Usage Effectiveness) as the Tier rating goes up.

Figure 6: EUE versus Tier rating

5 The data centre market

The data centre market can roughly be split into:
- Build and run a data centre for your own purposes
- Rent space at someone else's

Renting space at someone else's data centre usually goes under the name of colocation. Colocation itself splits into:
- Renting a rack or floor space to host your own equipment
- Renting active equipment, e.g. servers, to run your applications
- Using the data centre to manage your applications for you on their equipment, e.g. web hosting

Other terminology used is Carrier Neutral Hotel (CNH) colocation stock: data centres where the operator allows any carrier to connect into the facility and to connect to third parties within the facility, not discriminating between different carriers and charging only nominal fees for interconnection. This is split into two distinct offerings:
- Retail colocation: targets smaller requirements in terms of floor space/IT power, offering an element of colocation/managed services
- Wholesale colocation: targets larger requirements in terms of floor space/IT power, offering real estate FM services only

According to CB Richard Ellis [12], London provides 42% of the top-level market, followed by Frankfurt at 28%, Paris at 14%, Amsterdam at 10% and Madrid at 6%. The London figure represents 260,000 m² of computer room space.

Figure 7: Data centre stock availability

Figure 7 shows the stock availability for data centre space in thousands of square metres. Despite floor space being added at a rate of about 9% per year, availability has gone down in London every year except 2008. In the first two quarters of 2009 availability went down again; the overall London vacancy rate was 23.8% and the fully-fitted vacancy rate was 6.2%. This is indicative of a healthy market equilibrium [12].

The market is dominated by major data centre colocation companies such as:
- Interxion
- Equinix
- Digital Realty
- The Bunker
- Node4
- Teledata

There are also many smaller players in the market. There are various ways of pricing the colocation product offering, and we shall come onto the details later. According to the Tariff Consultancy (TCL) [13], the countries with the highest average per-rack prices in the latest survey are Denmark, followed by a group of countries including Austria, Ireland, Switzerland and the UK, all of which have monthly per-rack rates of above €1,000. The country with the lowest per-rack price in the Tariff Consultancy survey is Italy, followed by the Czech Republic.

The largest country data centre markets are the UK, Germany, France and Spain, for both raised floor space and revenues. As of 2010, the UK accounts for 16 per cent of the raised floor space, followed by Germany (14 per cent), France (10 per cent), Spain (8 per cent) and Italy and the Netherlands (both on 6 per cent). Whilst raised floor capacity in all markets is projected to increase by an average of 14 per cent per annum, TCL projects that revenue will increase by some 25 per cent per year over the period from 2010 to 2015. The health of the data centre sector is underlined by revenue per square metre being forecast to increase by an average of 5 to 8 per cent per annum. Data centre revenue across 19 countries is forecast to increase from €3.2 billion per annum (2010) to €7.3 billion (2015), representing total growth of 125 per cent (an average of 25 per cent per annum) over the period.

Figure 8: Preferred data centre locations

According to Gartner, most users would prefer to have their colocation data centre within easy driving distance: 59% of users according to their 2009 survey. Many regions are promoting themselves as ideal locations for a data centre, mostly based on low operating costs due to low ambient temperatures and/or low electricity costs. In the technical press over the last year, adverts have appeared from the governments of Iceland, Sweden, Scotland and even Mauritius trying to persuade data centre operators to locate in their areas. In Figure 9 we can see a map of major private sector data centres [14], and the attraction of the London/M25 area is obvious. This area suffers, however, from high land prices, a shortage of suitable sites and limitations on electricity grid capacity. In East Anglia (taken to be generally east of the A1) we can see some activity around Cambridge and also around Ipswich, presumably the old BT site at Martlesham. See Appendix 2 for details of the new Infinity data centre in East Anglia.

Figure 9: Location of data centres in England and Wales

The pricing of the data centre colocation offer depends upon the principal service offered, e.g. rack space only, managed services etc. We will focus on the rack-space-only part of the market. The principal unit of sale is the rack. This may vary in size but will not be less than 600 mm wide and 1000 mm deep. For communications equipment, larger racks of up to 800 mm wide by 1200 mm deep are preferred. The most common height is 42U, meaning space for 42 standard rack-mounted units, where a U is 44 mm high. This amounts to a rack about 2.1 m high. Height is not a limiting factor, as it is not generally possible to fill a 42U rack completely with active equipment: it would not be able to cope with the power and cooling demands. Power and cooling are the limiting factors. Racks are rated either by their power capacity in kilowatts (kW) or by their current capacity in amps. Power (kW) = voltage (volts) x current (amps) x power factor. The power factor of good quality IT equipment is generally about 0.95, so with a standard UK voltage of 230 V, a 16 amp load would be about 0.95 x 16 x 230 = 3.5 kW.
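A minimal sketch of that rack power arithmetic; the amperages are the standard feed sizes discussed in this report, and the 0.95 power factor is the report's assumption.

    def rack_power_kw(amps: float, volts: float = 230.0, power_factor: float = 0.95) -> float:
        """Rack power in kW: volts x amps x power factor."""
        return volts * amps * power_factor / 1000.0

    for amps in (8, 16, 32):
        print(f"{amps:2d} A feed -> {rack_power_kw(amps):.1f} kW")
    # 8 A -> 1.7 kW, 16 A -> 3.5 kW, 32 A -> 7.0 kW (the 7 kW rack used later)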

The costs can be summarised as:
- An initial setup fee
- A monthly rack rental cost
- Electricity, which may be metered and charged for separately
- Communications links, which may be charged for and generally relate to the speed of connection on offer
- Other fees relating to local technical support services, or "hands and eyes" as it is often referred to

It is important to bring all fees together when trying to compare like for like, and, like any consumer, one must beware of hidden fees. Some companies have relatively low rental fees and very large setup fees, and vice versa. The size of the fees will depend upon:
- The power or current capability of the rack
- The location of the data centre (within the M25 will attract a premium)
- The perceived reliability and security of the site, Tier 1 to 4 models etc.
- The brand value, if it is a major player
- Communication speeds available to the rack
- External telecommunications links available
- Length of contract commitment
- Dedicated secure caged areas and adjacent rack position guarantees
- Availability of 24/7 support, service and security personnel

One would presume that all support services, especially the air conditioning, are designed to cope with the heat load generated. For all the services one would expect at least an N+1 model.

Typical fees for data centres located in England and Wales, December 2009, are summarised below (a "?" indicates the fee was not published):

- www.nofrillsinternet.co.uk (London, Docklands): 42U rack; 8 A £349/month, setup £899; connection £85/Mbps; electricity included
- www.netcom.co.uk (London, Docklands): 23U half rack; 16 A £449/month, setup £905
- www.bogons.net (Gwent): 47U; 8 A £750/month, setup £1,200; electricity included
- www.serverspace.co.uk (London): 4U, from 0.5 A; 16 A £150/month, setup £70; 32 A £250/month; electricity included
- www.aidatacentre.co.uk/colocation (Stevenage): 46U; 8 A £896/month, setup £995; 16 A £1,695/month; electricity included
- www.redstation.com (Hants): 42U; 8 A £695/month, setup ?; 16 A £1,095/month; 100 Mb/s connection included; electricity included
- www.saxondata.co.uk/colocation (Gloucester): 42U; 8 A £636/month, setup ?; 16 A £772/month; electricity included
- www.coreix.net/colo/standard/42u.php (London): 42U; 8 A £1,294/month, setup £999; electricity included
- www.c4l.co.uk (Greenwich): 42U; 8 A ?; 16 A £795/month
- www.interxion.com (London): 42U; 16 A £885/month, 32 A £1,335/month; setup £4,000; electricity £0.18/kWh
- www.thebunker.net (Newbury): 42U; 16 A £945/month, 32 A £1,585/month; setup £1,000; connection £50/Mbps/month; electricity included
- NGD (Gwent): 42U; 32 A £690/month; setup £65,280; electricity £0.09/kWh
- A public sector organisation reselling capacity to the public sector (Wales): 42U; 32 A £175/month; setup £30,000; electricity £0.09/kWh

From the survey we can see that prices vary widely, and for some organisations with names like "no frills internet" one probably gets what one pays for. At the lower end of the market we can still see a great prevalence of 8 and 16 amp racks, which would indicate older installations. We take the last four suppliers in the list as the professional top end of the market, with 32 amp racks in new N+1/2N facilities. We can then calculate the total cost of renting 20 racks in high quality space, where each rack averages a 3 kW load, over three years; a sketch of the calculation follows, with the results tabulated after it.
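A minimal sketch of that three-year cost model. The fee sets are taken from the survey above (suppliers 1 to 4 being Interxion, The Bunker, NGD and the Welsh public sector operator respectively); continuous 24/7 operation at a constant 3 kW per rack is assumed where electricity is metered.

    HOURS_3Y = 24 * 365 * 3  # three years of continuous operation
    RACKS, KW_PER_RACK = 20, 3.0

    # (setup fee per rack GBP, monthly rental GBP, electricity GBP/kWh; 0 = included)
    suppliers = {
        1: (4000, 1335, 0.18),
        2: (1000, 1585, 0.0),
        3: (65280, 690, 0.09),
        4: (30000, 175, 0.09),
    }

    for name, (setup, monthly, elec) in suppliers.items():
        energy_kwh = RACKS * KW_PER_RACK * HOURS_3Y
        total = RACKS * (setup + monthly * 36) + energy_kwh * elec
        print(f"Supplier {name}: GBP {total:,.0f} over 3 years")
    # Supplier 1: 1,325,024   Supplier 2: 1,161,200
    # Supplier 3: 1,944,312   Supplier 4: 867,912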

Supplier   Setup (per rack)   Monthly rental   Electricity   Total over 3 years
1          £4,000             £1,335           £0.18/kWh     £1,325,024
2          £1,000             £1,585           included      £1,161,200
3          £65,280            £690             £0.09/kWh     £1,944,312
4          £30,000            £175             £0.09/kWh     £867,912

These figures lead to a normalised 32 amp rack cost per month:

Minimum £1,205/month; Average £1,840/month; Maximum £2,700/month

Another way to consider it is that the income per month per amp supplied (in our 32 amp model) is £57.50/amp/month. The figures above show that there is some variability in the market and it is not a true commodity. Some suppliers want a very large up-front investment to cover much of their own build costs. The figures do show what kind of return can be expected when offering high quality 32 amp data centre space to the market, i.e. around £1,800 per rack per month. By considering average build costs we are now in a position to start estimating Return on Investment models.

6 Data Centre design and Build costs

We will consider a 20 rack rebuild project and then a small data centre based on 40 or 60 server racks. This represents the University's options of renovating an existing room or building a medium-sized commercial data centre.

6.1 Cooling technology and models

We have seen that professional commercial data centre space is based around a 32 amp, 7 kW rack model. Many sites still offer 8 and 16 amp racks, but it would not be seen as good design, or even cost effective, to design to those levels now. A 42U rack, limited to 32 amps, can only be filled to within 50-60% of its physical capacity. If we try to put more equipment in, we will exceed the capacity of the power supply and/or generate so much heat that equipment will fail. If we try to completely fill a 42U rack with equipment, say five blade servers (representing 60 computers), then we will be drawing up to 25 kW. This becomes very impractical both in terms of power supply and cooling. The limit of 32 amps/7 kW represents what is practically possible within the bounds of conventional air cooling. Some people claim to have achieved more than this figure, but it either turns out to be unjustified, or the racks have to be so spread out that actual floor usage starts to go down again. 32 amps, single phase, is also seen as a standard power supply. To get more power into a rack we could lay on more 32 amp connections, but at this stage it would be more cost

effective to move to a three-phase power distribution system. This is not that difficult but leads to another step up in complexity. The main difference in cooling above 7 kW is the change to water cooled racks. Water is much more effective at carrying away heat than air, so by pumping cold water directly into a heat exchanger, in or adjacent to the rack, cooling capacities of up to 25 kW are possible. For cooling capacities up to 35 kW per rack, a liquid carbon dioxide cooling system is available. For cooling capacities up to 50 kW per rack, servers are available which can be plumbed directly into the chilled water supply. This is almost a return to the mainframes of twenty years ago. We can summarise these technologies as:

- Poorly designed air cooled racks: 1-4 kW cooling capacity per rack
- Well designed air cooling: 1-7 kW per rack
- Water cooled racks: 1-25 kW per rack
- CO₂ cooled racks: 1-35 kW per rack

Air cooling
- Advantages: lowest capital cost, e.g. £1,000 per fitted rack; known technology
- Disadvantages: limited to about 7 kW/rack; not the most efficient in terms of electricity consumption; not good for blade servers

Liquid cooling
- Advantages: high density of equipment, up to 25 kW with water cooling; most efficient use of electricity; good for blade servers
- Disadvantages: very high capital cost, e.g. £10k per fitted rack; centralised chiller plant needed

We can see that moving to a water cooled system needs a big jump in capital cost, up to ten times more, whilst offering three to four times more equipment density. Blade servers can be a problem for air cooled installations because they take up to 5-6 kW each: in an air cooled rack it may only be possible to fit one blade server (about 10U) in a 42U rack. Standard 1U and 2U servers, storage and networking equipment generally take between 300 and 600 watts per item, hence the 50-60% fill ratio we see in a standard air cooled rack. Water cooled systems need centralised chiller plant to produce the cold water; the chiller plant itself has to be duplicated so that it does not become a single point of failure. Air cooled systems tend to use distributed DX chiller units around the computer room. The amount of electricity used to produce a kilowatt of cooling is known as the system's Coefficient of Performance (CoP) or Energy Efficiency Ratio (EER); note that this report quotes it as electricity in per unit of cooling out, the reciprocal of the more usual convention. An air cooled system may have an EER of between 0.6 and 1: this means that 3 kW of cooling could cost between 2 and 3 kW of electricity to produce, depending on how good the design is. A water

cooled system may have an EER of about 0.5, so 2 kW of cooling takes about 1 kW of electricity to produce. A liquid carbon dioxide cooling system may have an EER of 0.33, so 3 kW of cooling takes about 1 kW of electricity to produce. This is considered to be about the theoretical limit. We can conclude that the choice of cooling system and the design capacity of the rack is the most fundamental design parameter to be decided. Most data centres have concluded that it is most economic to use a standard air cooled method and make the most of the floor space they have. If limited space and a desire to host blade servers are the most pressing requirements, then water cooling would be the way to go. Data centres offering water cooled 25 kW racks would still be seen as a very specialist end of the market. It would be possible for a data centre to have a mix of air and water cooled racks if the infrastructure was designed for it. Google has concluded that it is more cost effective to use many standard air cooled 2U servers than a dense blade server environment [7]. This report will now focus on an air cooled model.

Figure 10: Effect of blade servers in a rack (source: Dell website)

Figure 11: Water cooled rack

Figure 12: One blade server mounted in an air cooled rack
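To recap the cooling arithmetic of this section, here is a minimal sketch using the report's EER convention (electricity in per kilowatt of cooling out, i.e. the reciprocal of the usual definition).

    def cooling_electricity_kw(cooling_kw: float, eer: float) -> float:
        """Electricity needed to produce cooling_kw, with EER = electricity/cooling."""
        return cooling_kw * eer

    # EER values quoted in the text for each technology
    for tech, eer in (("air cooled (good design)", 0.66),
                      ("air cooled (poor design)", 1.0),
                      ("water cooled", 0.5),
                      ("liquid CO2", 0.33)):
        kw = cooling_electricity_kw(3.0, eer)
        print(f"{tech:25s}: 3 kW of cooling needs ~{kw:.1f} kW of electricity")
    # ~2-3 kW for air, ~1.5 kW for water, ~1 kW for CO2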

6.2 Space required

Once we know how many racks are required, we can calculate the amount of floor space needed. Capitoline's research has shown that on average a server rack needs 3.2 square metres of floor space to support it. So for our three models the computer room floor space needs to be:

20 racks: 64 m²
40 racks: 128 m²
60 racks: 192 m²

This spacing is derived from using:
- A hot aisle/cold aisle format
- Standard 600 x 1000 mm server racks
- An 8-tile pitch, i.e. rack, 2 tiles for the cold aisle, rack, 2 tiles for the hot aisle, rack etc., where a tile is 600 x 600 mm
- A minimum clearance of 1.8 m from A/C equipment
- An allowance for electrical and communications equipment
- Minimum pathways around the equipment of 900 mm to cope with fire and disability access regulations

Figure 13 below shows a typical 200 m² computer room with 48 server racks and nine communications racks in an 8-tile pitch, hot aisle/cold aisle configuration. Apart from the computer room, a data centre needs to supply a range of support services to make that computer room work, ranging from control rooms to office areas to M&E plant areas. Appendix 1 defines all the spaces required in a functioning data centre. Some of these requirements will be combined in one room, and some will be shared with other parts of the building, but one way or another these functions have to be in place for the data centre to work. We believe that even the smallest computer room needs about 275 m² of associated support space to make it work if it is to have the title of data centre.
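A minimal sketch of that sizing rule: 3.2 m² per rack for the computer room, plus the ~275 m² of associated support space quoted above.

    RACK_FOOTPRINT_M2 = 3.2   # average floor space per server rack (Capitoline)
    SUPPORT_SPACE_M2 = 275.0  # minimum associated support space

    def computer_room_m2(racks: int) -> float:
        return racks * RACK_FOOTPRINT_M2

    for racks in (20, 40, 60):
        cr = computer_room_m2(racks)
        print(f"{racks} racks: {cr:.0f} m2 computer room, "
              f"{cr + SUPPORT_SPACE_M2:.0f} m2 total data centre")
    # 20 -> 64 m2 (339 m2 total), 40 -> 128 m2 (403), 60 -> 192 m2 (467)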

Figure 13: Typical computer room showing one server rack per 3.17 m². The University of Kent, designed by Capitoline.

6.3 Requirements of the data centre

To design our data centre we must decide on the following:
1. Size, from the number of racks we need. We will use 20, 40 and 60
2. Resilience level. Tier 3, N+1, is seen as the preferred professional level. Tier 4 installations are viewed as rare, expensive and for specialised applications

3. Decide on the rack power and cooling capacity. We will use a 7 kW rack with air cooling
4. Employ all realistic energy saving devices and methods
5. Calculate the power capacity and cooling required
6. Design the computer room and space for all the support services
7. Design internal cabling and external communications interfaces
8. Design an appropriate fire detection, alarm and suppression system
9. Design an appropriate security, CCTV and access control programme
10. Take cognizance of all required legislation, e.g. fire, Building Regulations etc.

Having designed the best building layout we then have to:
11. Find the right location
12. Establish the budget and financing
13. Build it

To operate the data centre we need:
14. Operational and business processes
15. An Information Security Management System
16. A disaster recovery process

6.4 Costs of building the data centre

Shell and core costs, on average, for the East Anglia region are £957/m² according to Building Design [15]. This is for new build; another £233/m² should be added where demolition works and rebuild of an old site are required. From Capitoline's data we have an estimate of the fit-out costs per square metre of computer room, per Tier rating. Note this applies to standard air cooling using modern techniques such as enclosed hot aisle/cold aisle construction.

Figure 14: Fit-out cost per square metre. Approx £8k/m² for Tier 3

From the above, we would need the following sizes for a new build project (the build cost sketch below works these sizes through to the cost table that follows):

            Computer room   Support area   Total
40 rack     128 m²          275 m²         403 m²
60 rack     192 m²          275 m²         467 m²
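A minimal sketch of the build cost arithmetic: shell and core at £957/m² over the whole footprint, plus Tier 3 fit-out at £8k/m² over the computer room floor only. That convention (fit-out applied to the computer room area alone) is inferred from the cost table that follows.

    SHELL_CORE_PER_M2 = 957.0     # GBP, East Anglia average (Building Design)
    FITOUT_TIER3_PER_M2 = 8000.0  # GBP, Capitoline estimate for Tier 3

    def build_cost(cr_m2: float, support_m2: float = 275.0):
        shell = (cr_m2 + support_m2) * SHELL_CORE_PER_M2  # whole building
        fitout = cr_m2 * FITOUT_TIER3_PER_M2              # computer room only
        return shell, fitout, shell + fitout

    for racks, cr in ((40, 128.0), (60, 192.0)):
        shell, fitout, total = build_cost(cr)
        print(f"{racks} racks: shell GBP {shell/1000:.0f}k + "
              f"fit-out GBP {fitout/1000:.0f}k = GBP {total/1000:.0f}k")
    # 40 racks: 386k + 1024k = 1410k;  60 racks: 447k + 1536k = 1983k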

            Build cost   Fit-out cost (Tier 3)   Total
40 rack     £386k        £1,024k                 £1,410k
60 rack     £447k        £1,536k                 £1,983k

We estimate therefore that the build costs for a new, single storey, Tier 3 data centre in the East Anglia region would be between £1.5m and £2m. Refurbishment costs for an existing room, e.g. a 150 m² campus computer room, would be of the order of £1,235k (using £233/m² as strip-out costs). The two examples given below show the approximate distribution of fit-out costs in various sized data centres. We can see that power and cooling typically account for around 60% of the fit-out costs.

Figure 15: Fit-out costs in a large (8,500 m²) data centre (Building Services Journal)

Figure 16: Fit-out costs in a medium (200 m²) sized data centre (Capitoline). Breakdown: power 40%, HVAC 20%, data cabling 9%, FM200 6%, insurance and fees 4%, BMS 3%, entrance room fit-out 3%, minor building works 3%, raised floor 3%, VESDA 3%, CCTV/access 3%, lighting 2%, documentation 1%, cable containment 0%

6.5 Operational costs

The largest areas of cost will be electricity and manpower. The cost of the electricity will depend upon the unit price and also the level of occupation and use. Staffing costs will depend upon whether the staffing level is normal working hours, 24/7 or something in between. Security costs will also depend upon whether the facility is completely standalone or within easy reach of other University locations. We can estimate the annual costs as:

Facilities management/repair [16]: £37k
Staff, 4 @ £52k p.a.: £208k
Security, 1 person 24/7: £66k
Electricity, 40 racks at 3 kW/rack: £161k, including overheads and a PUE of 1.7
Rates/insurance @ £100/m²: £47k
Contingency: £50k
Total annual cost: £569k
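A minimal sketch of the electricity line in that estimate. The £0.09/kWh unit rate is an assumption (it matches the metered rates quoted in the colocation survey earlier); the 3 kW per rack and the PUE of 1.7 are the report's figures.

    RACKS, KW_PER_RACK, PUE, PRICE = 40, 3.0, 1.7, 0.09  # PRICE in GBP/kWh (assumed)

    site_kw = RACKS * KW_PER_RACK * PUE       # IT load grossed up by the PUE overhead
    electricity = site_kw * 24 * 365 * PRICE  # ~GBP 161k p.a.

    # FM/repair, staff, security, rates/insurance, contingency from the list above
    other_costs = 37_000 + 208_000 + 66_000 + 47_000 + 50_000

    print(f"Electricity: GBP {electricity/1000:.0f}k p.a.")
    print(f"Total annual cost: GBP {(electricity + other_costs)/1000:.0f}k")  # ~569k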

Over three years the build and running costs would amount to (assuming running costs will not vary much between the 40 and 60 rack models, apart from electricity):

40 rack building: £2,860k
60 rack building: £3,690k

Cost per rack per month over a three-year period:

40 rack building: £1,986/month
60 rack building: £1,708/month

If we compare this to the going rate for a three-year contract at the high end of the commercial market:

Minimum £1,205/month; Average £1,840/month; Maximum £2,700/month

We can see that a 40 rack data centre would probably never be economic in terms of return on investment. A 60 rack data centre would appear to be the smallest viable size for a purely commercial operation, with an excess return of £132 per rack per month if we take the average selling price. This model is sensitive to fully populating the data centre with paying customers: 26 of the racks, or a 43% loading factor, are required just to pay the running costs. If we use these figures to consider the Return On Capital Employed (ROCE) after five years, with a cost of capital of 4%, we can see that a fully populated data centre returns 4.73% p.a. after five years, but an average loading of less than 48 racks (80%) would give a negative return.

Interest rate: 4.0%; period: 5 years (cost-of-capital factor 1.04^5 = 1.216653)
Build cost: £1,983,000; annual running cost: £569,000; rack rental: £1,840/month

Rack loading                          33%           66%           100%
Income p.a.                           £441,600      £883,200      £1,324,800
Income after 5 years                  £2,208,000    £4,416,000    £6,624,000
Build cost, with cost of capital      £2,412,623    £2,412,623    £2,412,623
Running costs                         £2,845,000    £2,845,000    £2,845,000
Total costs                           £5,257,623    £5,257,623    £5,257,623
Income minus costs                    -£3,049,623   -£841,623     £1,366,377
ROCE (5 years)                        -0.580        -0.160        0.260
ROCE p.a.                             -15.93%       -3.43%        4.73%

Solving for breakeven after 5 years gives 48 racks.
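A minimal sketch reproducing the ROCE table above. The annualisation of the five-year ROCE as a geometric rate, (1 + ROCE)^(1/5) - 1, is inferred from the table's own numbers rather than stated explicitly in the report.

    BUILD, ANNUAL_COST, RENT = 1_983_000.0, 569_000.0, 1_840.0
    RATE, YEARS, TOTAL_RACKS = 0.04, 5, 60

    build_with_capital = BUILD * (1 + RATE) ** YEARS        # ~GBP 2,412,623
    total_costs = build_with_capital + ANNUAL_COST * YEARS  # ~GBP 5,257,623

    for racks in (20, 40, 60, 48):  # 33%, 66% and 100% loading, plus breakeven
        income = racks * RENT * 12 * YEARS
        roce = (income - total_costs) / total_costs
        roce_pa = (1 + roce) ** (1 / YEARS) - 1
        print(f"{racks} racks ({racks / TOTAL_RACKS:.0%}): "
              f"ROCE {roce:+.3f} over 5 years, {roce_pa:+.2%} p.a.")
    # 20 racks: -0.580 (-15.93% p.a.); 40: -0.160 (-3.43% p.a.);
    # 60: +0.260 (+4.73% p.a.); 48 racks is roughly the breakeven point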

These figures reinforce the message that, as a commercial concern, a data centre with 60 racks is the minimum sized viable operation. Of course there will be economies of scale with a larger operation, but only up to a point. Our figures show that the ideal computer room size is approximately 600 m²: this is the lowest operating cost size. Computer rooms larger than that tend to cost more to run, as the air conditioning, fire suppression and other technical issues lead to an even more complex design. In the past we have seen data centres with computer room floors up to 2,000 m², but modern designs tend to build blocks or modules of between 400 and 600 m² at a time to arrive at the final overall desired floor area. Appendix 3 shows a Tier 3 style data centre with a 500 m² computer room floor, adequate support facilities and CRAC units placed against the outside wall to provide a low cost, high efficiency air economiser HVAC solution.

Presuming a reduction in average fit-out costs from £8k to £6k per square metre, the five-year ROCE at 100% utilisation becomes 6.51% and the breakeven occupancy rate becomes 74%. According to CBRE [12] the average vacancy rate in UK data centres was 24%, from which we may deduce that the occupancy rate was 76%. This is an average, and we may presume that the higher quality end of the market fared much better. Digital Realty Trust claims an average occupancy rate of 93.9% for its data centres, with clients such as Verizon, Fidelity, Yahoo!, eBay, Microsoft and AT&T (www.siliconrepublic.com, January 2009).

We can draw some conclusions at this stage:
1. A new-build data centre should contain between 60 and 180 server racks to be cost effective
2. New build costs will be between £2m and £4m for the above sizing
3. Annual running costs will be in the order of £0.6m to £1.4m for the above sizing
4. The average rental rate for a 7 kW rack in good quality space is £1,840/month. It could be up to 46% higher for a premium offering
5. Occupancy rates need to be in excess of 74% to be profitable, but this is slightly less than the actual average UK rate

The above arguments have looked at the economic models on a purely commercial basis and may not always be applicable to a public sector body that may have other priorities and the ability to displace costs from other similar activities. However, if the intention is to attract third parties, even other public sector bodies, then an understanding of likely costs and income is still essential.

6.6 Commercial model of rebuilding CR2

Computer Room 2, at 248 m², would make a viable space for a colocation centre, subject to finding enough space around it to serve the stand-alone support functions, e.g. secure entrance, loading area, unpacking and build area etc. 248 m² would provide space for about 75 racks on the hot aisle/cold aisle model. If we presume the UEA would want at least 15 for its own disaster recovery/backup requirements, this would leave 60 rack spaces to be rented out.

The build cost for this space would be much less, as the raised floor and much of the cabling, HVAC and power train could still be used. We would use a lower refit cost of £3,500/m² for this environment, which gives a capital cost of about £868,000. If we add the same annual running costs of £600,000 p.a. we would have a three-year total cost of £2.668m. This gives a breakeven point of 40 racks (a 66% loading factor) over three years at the £1,840 per month average rack rate. Any loading above this would return a significant profit.

7 What makes a data centre Green?

In Chapter 3 we considered the Green metrics that allow energy efficiency to be measured, and we saw that several competing schemes are coming to the market. We have also considered the resilience models, the market dynamics and likely build costs. Green data centres are becoming a mainstream expectation within all new data centre projects, and some companies are starting to claim completely carbon neutral operations. We can consider carbon neutrality from three viewpoints:

Running the data centre as efficiently as possible
Using non-fossil fuel energy sources
Entering into a carbon offset scheme

The easiest way to reduce energy consumption is to design and run the data centre as efficiently as possible. This may seem self-evident, but there are two drawbacks:

Maximum efficiency does not equate to highest reliability. Redundant power and HVAC systems will not usually be running at peak efficiency because they are often very lightly loaded
Nearly every energy saving device means a higher capital cost, and the cost versus return of many products currently available in the market needs careful consideration

Figure 17: Typical energy overheads in a data centre

The following is a list of energy saving ideas.

Low cost
1. Organise the racks in a hot aisle/cold aisle format
2. Use blanking plates in racks to prevent hot and cold air mixing
3. Use brush strips and grommets to prevent cold air escaping along cable paths
4. Ensure raised floor areas are sealed and not leaking away expensive chilled air
5. Run temperatures slightly hotter and with wider humidity bands (ASHRAE 2008 [20])

Medium cost
6. Ensure the building is insulated to at least Part L Building Regulations, to keep the warm air out and the chilled air in and to minimise solar thermal gain
7. Use virtualised, high performance servers to lower IT power overheads
8. Use enclosed cold or hot aisle racking systems
9. Use higher efficiency UPS (Uninterruptible Power Supplies), such as transformerless UPS, and load them to their 40-80% optimum efficiency band
10. Load all three-phase power systems to within 5% balance to minimise neutral conductor heating and harmonics
11. Buy IT equipment with high efficiency power supplies and high power factors
12. Use power factor correction and voltage conditioning equipment for the whole site
13. Use simple airside economisers where possible to lower HVAC costs

High cost
14. Instead of a battery-backed UPS, use a rotary kinetic energy storage system
15. Use hot air return plenums

16. Use centralised chilled water systems rather than distributed DX HVAC units
17. Use more sophisticated air and water economisers, e.g. cooling towers
18. Use dry-cooler HVAC heat exchange equipment
19. Use large scale heat exchangers

The ideas listed under Low and Medium cost are so cost effective that it would be foolish not to incorporate them into data centres. However, one cannot presume that they are all currently used, especially in older data centres, due to a lack of knowledge of these techniques among data centre managers and builders. Two ideas from the medium cost section that we would like to highlight are enclosed cold aisle racking and air economisers.

Enclosed cold aisle

The traditional method of rack layout in a computer room is hot aisle/cold aisle. This is considered very efficient, as air is delivered to the front air intakes of the IT equipment via grilles in the cold aisle and removed from the rear of the equipment in the hot aisles. Cold air and hot return air are thus largely prevented from mixing, which would otherwise be a great source of inefficiency. The enclosed cold aisle idea takes this one step further by putting a roof over the cold aisle and doors at each end to totally enclose it. The cold air delivered into the enclosed area via the floor tiles then has nowhere to go but through the hot IT equipment. This method is believed to raise the proportion of cold air actually used as follows (a worked illustration follows the figure below):

Poor layout in a whole-volume cooling approach: 30%
Well designed and installed hot aisle/cold aisle layout: 50-75%
Enclosed cold aisle: 90%

Figure 18: Enclosed cold aisle layout
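To illustrate what these utilisation figures mean in practice, the hypothetical sketch below estimates how much chilled air the CRAC units must supply for a given IT heat load at each utilisation level; the 560 kW load is an assumed example (borrowed from the model in chapter 8), not a measured value.

```python
# Illustrative only: chilled-air supply needed for a given IT heat load
# when only a fraction of the supplied cold air reaches the equipment.
IT_LOAD_KW = 560  # assumed example IT heat load

utilisation = {
    "poor whole-volume cooling": 0.30,
    "good hot aisle/cold aisle": 0.625,  # midpoint of the 50-75% band
    "enclosed cold aisle": 0.90,
}

for layout, fraction in utilisation.items():
    supply_kw = IT_LOAD_KW / fraction  # cooling the CRACs must deliver
    print(f"{layout}: ~{supply_kw:.0f} kW of chilled air supplied")
```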

Air side economiser

In the cooler latitudes it makes sense to make more use of the cool air naturally available. At the latitude of the UK the air temperature is below 20°C for about 70% of the year. An air side economiser will dump the hot air from the computer room outside when the external temperature is below about 20°C and replace it with free cool air from outside. When the external air temperature goes above 20°C the economiser turns off and the system operates as a conventional air conditioning system.

If we presume that an air economiser will subsidise the air conditioning by 50% for 70% of the year, and that air conditioning on average takes 35% of the electricity bill, then we would expect a reduction of 0.5 x 0.7 x 0.35, or about 12%, off the data centre electricity bill. The 60 rack model we have discussed would have an electrical load of about 600 kW, or 5,256,000 kWh per annum. At £0.09 per unit this would cost about £473,000 per year in electricity, so a 12% saving would be worth about £57,000 p.a. (a short sketch of this calculation follows below). In the British climate we believe air side economisers to be a very cost effective addition.

Figure 19 shows an air side economiser installation at Cheshire County Council designed by Capitoline. This 80-rack installation added air-side economisers for about £40k capital investment and is expected to achieve payback in less than three years. The air economiser must be used in conjunction with filtered and monitored outside air, and be controlled by a Building Management System (BMS), in order to be truly effective.

Figure 19: Air economiser installed at Cheshire County Council. Having the CRAC units placed directly beside the exterior walls makes the air ducting very simple to implement
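A minimal sketch of the saving estimate above, using the report's own assumptions:

```python
# Sketch of the air-side economiser saving estimate above.
ECONOMISER_SUBSIDY = 0.50  # fraction of HVAC work displaced while running
COOL_FRACTION = 0.70       # fraction of the year below ~20°C in the UK
HVAC_SHARE = 0.35          # air conditioning share of the electricity bill
LOAD_KW = 600              # electrical load of the 60 rack model
TARIFF = 0.09              # £ per kWh

annual_kwh = LOAD_KW * 8760                # ~5,256,000 kWh p.a.
annual_bill = annual_kwh * TARIFF          # ~£473,000 p.a.
saving = ECONOMISER_SUBSIDY * COOL_FRACTION * HVAC_SHARE  # ~12%
print(f"Annual electricity bill: £{annual_bill:,.0f}")
print(f"Economiser saving: ~{saving:.0%}, about £{annual_bill * saving:,.0f} p.a.")
```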

Figure 20: Improvements in the DCiE at an American data centre after installing air economisers (Digital Realty)

The ideas listed under the High cost heading need much more careful consideration of the return on investment. It is worth explaining them in more detail.

Kinetic energy storage systems

Most ICT installations have a battery-backed UPS. These can be quite inefficient due to the need to take the AC input, convert it to DC to charge the batteries, and then convert it back to AC again; efficiencies of around 88% are common. An alternative method is to use the mains electricity to drive a motor which in turn drives a generator. The motor-generator combination makes a much more efficient filter, around 97% efficient, and the energy storage comes from a large rotating mass, i.e. a flywheel, between the motor and generator. The rotating mass has kinetic energy and will keep driving the generator for several tens of seconds should the input power fail completely, giving time for a standby diesel generator to start. This is of course a simplification of how they work, and there are many variants on this theme. They are expensive capital items to buy, and they seem to be most popular in the 1 MW+ class of data centre, where the 97% efficiency represents such a large saving in electricity that they are justified (the sketch below puts illustrative numbers on this). The downside is that the backup time is measured in tens of seconds, so the diesel generators must be ready to start at the first demand.

Figure 21: UPS efficiency [17]
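To put the 88% versus 97% figures in money terms, the hypothetical sketch below applies them to the 600 kW load of the 60 rack model at the £0.09/kWh tariff used earlier; the load and tariff are our assumptions for illustration only.

```python
# Illustrative comparison of UPS losses: static (battery) vs rotary UPS.
LOAD_KW = 600  # assumed critical load (60 rack model)
TARIFF = 0.09  # £ per kWh, assumed

def annual_loss_cost(efficiency: float) -> float:
    """Annual cost of the energy dissipated in the UPS itself."""
    loss_kw = LOAD_KW / efficiency - LOAD_KW
    return loss_kw * 8760 * TARIFF

battery = annual_loss_cost(0.88)  # typical double-conversion battery UPS
rotary = annual_loss_cost(0.97)   # motor-generator flywheel set
print(f"Battery UPS losses: £{battery:,.0f} p.a.")
print(f"Rotary UPS losses:  £{rotary:,.0f} p.a.")
print(f"Potential saving:   £{battery - rotary:,.0f} p.a.")
```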

Hot air return plenums

The most common way of supplying cold air into a computer room is to pump it into a raised floor plenum, where it can be delivered directly to the front of the computer racks through grilles in the floor. Usually the hot air makes its way back to the top of the Computer Room Air Conditioning (CRAC) unit by natural convection. It would be more economic to capture the hot air directly above the racks by building a hot air return plenum, like a suspended ceiling, and deliver it directly back to the top of the CRAC units. A hot air return plenum would also make an air economiser system easier to implement. The depth of the return plenum needs to be at least the same as that of the underfloor delivery plenum, which is typically 600 mm. Variations on this model include:

Chimneys on the back of racks that deliver the hot air directly into the hot return plenum
Hot aisle containment systems that enclose the hot aisle to facilitate delivery of that air into the return plenum

Figure 22: Racks delivering hot air via chimneys into a ceiling return plenum

Centralised chilled water cooling systems

In Europe the most common form of air conditioning system is known as the DX, or Direct Expansion, system. DX systems work by allowing a refrigerant to evaporate in a coil (in the CRAC unit within the computer room), which removes heat from the air flowing past it. The DX CRAC unit is connected by pipes to a corresponding condenser unit outside, where the refrigerant condenses and delivers its heat energy to the ambient external air.

DX units are popular in smaller systems due to lower capital costs, simplicity of design, and the fact that every unit is independent of the others, so the system is very resilient. As installations get larger, though, it becomes more efficient to have a centralised chiller plant that produces large volumes of cold water at about 6°C. This cold water is pumped to the same CRAC units, and the heat removal mechanism is now the absorption of heat by the cold water.

Cooling systems are measured by their coefficient of performance (CoP), also referred to as the Energy Efficiency Ratio (EER). In this report we quote it as the number of kilowatts of electricity required to produce a kilowatt of cooling capacity (a short conversion sketch appears at the end of this section). The theoretical minimum appears to be about 33%, i.e. 3 kW of cooling requires about 1 kW of electricity to produce it. The figure worsens as the ambient temperature increases, as more work is required to reject heat across a smaller thermal gradient. Research has shown [17] that, due to poor layout, leaking air, blocked filters etc., the average is more like 60%, and it can be as poor as 166%, i.e. 3 kW of cooling taking 5 kW of electricity to produce. A well designed water-based cooling system should achieve 50%, whereas a well designed DX system will probably only achieve about 66%.

One must be careful that a centralised chilled water system does not build in a single point of failure. At least two chillers are required to give an N+1 layout, and all essential piping, valves and pumps must be duplicated; this of course raises the capital cost. Another advantage of a chilled water system is that high density, water cooled racks can be implemented if a source of chilled water is available.

Water economisers

If one is using chilled water then there are ways of cooling the warm return water other than passing it back through an energy intensive chiller unit. In a cooler climate the water can first be passed through a radiator so that it loses some of its heat naturally to the environment and presents less load to the chiller when it finally reaches it. A larger version of this is the cooling tower, where there is an evaporative cooling effect as well. A cooling tower will of course consume water, and steps to remove the risk of Legionnaires' disease must also be implemented.

Dry cooler

Conventional air conditioning relies on a compressor to raise the temperature of the refrigerant gas to a point, usually in excess of 50°C, at which it can lose its heat to the cooler ambient air. However, if that air is already cool then it may be possible to switch off the compressor, the main energy-hungry component of air conditioning, so that the refrigerant can lose some of its heat naturally to the environment via a separate radiator assembly, the so-called dry cooler. This method works when the ambient temperature goes below about 14°C. The disadvantage is the higher capital cost of the dry cooler equipment.
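Because this section quotes efficiency as electricity-in per cooling-out, while manufacturers usually quote the inverse ratio (the conventional CoP), the short sketch below converts the figures given above; the labels are ours.

```python
# Convert the report's "kW electricity per kW cooling" percentages to CoP.
figures = {
    "theoretical minimum": 0.33,
    "survey average [17]": 0.60,
    "worst case quoted": 1.66,
    "good chilled-water system": 0.50,
    "good DX system": 0.66,
}

for label, kw_in_per_kw_cooling in figures.items():
    cop = 1 / kw_in_per_kw_cooling  # conventional coefficient of performance
    print(f"{label}: {kw_in_per_kw_cooling:.0%} -> CoP {cop:.2f}")
```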

Figure 23: Energy consumption of an 1,100 kW chiller with and without a dry cooler (Climaveneta)

Large scale heat exchangers

We have discussed traditional air conditioning as a way of removing heat from a computer room, and also air side economisers, whereby hot air may be dumped outside (or re-used more profitably elsewhere) and cool external air taken in rather than being produced by a chiller. Another method is to allow the hot air in the computer room to lose its heat to the outside via an intermediate heat exchanger. One such method is marketed under the name of Kyoto cooling. This rather tongue-in-cheek marketing name describes a large, two metre diameter aluminium wheel that is placed half in and half out of the computer room roof. As it slowly rotates, the bottom half picks up the heat of the room whilst the top half loses its heat to the cooler outside air. This is effective at ambient temperatures below the mid-twenties Celsius. Once again, high capital cost is the main drawback of this and similar systems.

Figure 24: Kyoto cooling

8 The carbon footprint of a data centre

If we look at the 140 rack model described in Appendix 3, with each rack rated at 7 kW but running at an average of 4 kW, then we can estimate the power consumption of the data centre as:

IT load = 140 x 4 = 560 kW
Power and other overheads at 18% = 101 kW
HVAC load, assuming cooling is 35% of the total load = 356 kW

This adds up to 1,017 kW, or approximately one megawatt of load. The floor area of this example building is 1,008 square metres, so the power density is about 1 kW per square metre, and running continuously over a year gives an energy consumption of about 8,760 kWh/m² per annum. Using the government's [18] figure for grid derived electricity of 0.422 kgCO2/kWh leads to approximately 3,700 kgCO2/m² per annum (a worked sketch follows below). In terms of building energy efficiency this figure would not fare well on a certificate; data centres typically take more than fifty times the energy of a typical office on a square metre basis. There are only two ways to improve on this: use less energy, by employing the techniques described in chapter 7, and/or use a less fossil-carbon intensive source of energy. Other sources of energy include:

Wind turbines
Photovoltaic electricity
Combined Heat and Power (CHP) plants
Biomass
Absorption cooling

Absorption and adsorption cooling are methods of producing a cooling effect from a waste heat source. The University of East Anglia is probably unique in having an alternative source of electricity currently being constructed on site, which will provide a much greener source of electricity and the potential for efficient cooling using some of the waste heat generated. The biomass gasifier is currently being built and commissioned; a technical glitch with the wood chip feeder has delayed the process, and the current estimate is that the plant will be operational in March 2010 [19]. It will have a capacity of 1.4 MWe and 2 MW thermal, with the possibility of another similar sized gasifier being located in the building once the first one is working; the building has been built for two gasifiers. The current loading for the site is about 5 MW, and we believe the existing infrastructure can go up to about 8 MW, so there should be room for a 1.1 MW data centre. The plant can provide adsorption chilling to produce chilled water at about 2°C. It is not known what cooling capacity would be available for a data centre project, though the general thinking is that the present chiller is under-utilised and a new one could be added, so there should be plenty of capacity.
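Here is the promised sketch of the carbon footprint calculation; it keeps the exact power density rather than the rounded 1 kW/m², so the final figure lands slightly above the rounded estimate in the text.

```python
# Sketch of the carbon footprint estimate for the 140 rack model above.
RACKS = 140
AVG_RACK_KW = 4.0      # average draw per rack (rated at 7 kW)
OVERHEAD = 0.18        # power distribution and other overheads
HVAC_SHARE = 0.35      # cooling as a share of the total load
FLOOR_M2 = 1008
GRID_FACTOR = 0.422    # kgCO2/kWh, Building Regulations Part L [18]

it_kw = RACKS * AVG_RACK_KW                # 560 kW
non_hvac_kw = it_kw * (1 + OVERHEAD)       # ~661 kW
total_kw = non_hvac_kw / (1 - HVAC_SHARE)  # ~1,017 kW
kwh_per_m2 = total_kw / FLOOR_M2 * 8760    # ~8,800 kWh/m2 p.a.
print(f"Total load: {total_kw:.0f} kW")
print(f"Energy intensity: {kwh_per_m2:,.0f} kWh/m2 p.a.")
print(f"Carbon intensity: {kwh_per_m2 * GRID_FACTOR:,.0f} kgCO2/m2 p.a.")
```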

Whether the electrical and thermal capacity of the gasifier plant can be used for a new data centre project, either on or off site, is a political and economic question beyond the scope of this report. Even if only partly used, the marketing impact would be significant. Replacing the government's grid derived CO2 figure of 0.422 with the biomass figure of 0.025 kgCO2/kWh is a 94% reduction, and would give our 1,000 m² building example above a figure of about 219 kgCO2/m² per annum.

The above calculations have used the CO2 figure of 0.422, which is still the current figure in the Building Regulations Part L. However, a more up-to-date figure of 0.537 could be used instead, which would show even greater levels of saving. This figure comes from the DEFRA-recommended 2009 edition of the MTP Carbon Dioxide Emission Factors for UK Energy Use document [21].

It remains unclear at this stage whether the carbon reduction secured for electricity generation could be passed on to other consumers, because UEA has yet to decide whether it will sell the Renewable Obligation Certificates (ROCs) to a third party. Should it do so, the carbon reduction from the electricity generated by biomass gasification would be accrued elsewhere (by the purchaser of the ROCs) rather than by UEA. This would affect UEA's ability to market the data centre as green, though the potential reduction in carbon emissions from cooling would still be intact.

The Carbon Reduction Commitment Energy Efficiency Scheme comes into force in April 2010. UEA's operations fall under this legislation, which means that UEA has to report its carbon footprint annually and purchase carbon allowances to cover the predicted footprint for the coming years. The price of carbon allowances is fixed at £12 per tonne for the first two years and is then subject to auction; analysts' expectations are that the carbon price will rise under auction to between £30 and £60 per tonne. Constructing a new data centre on site would increase UEA's carbon footprint, and thus the number and cost of carbon allowances required. This would need to be taken into consideration in due course.

References

1 TIA 942: Telecommunications Infrastructure Standard for Data Centers (2005)
2 U.S. EPA ENERGY STAR Report to Congress on Server and Data Center Energy Efficiency, Public Law 109-431 (2007)
3 Aviation and the Global Atmosphere, IPCC, 1999. J.E. Penner, D.H. Lister, D.J. Griggs, D.J. Dokken, M. McFarland (Eds.)
4 European Commission Code of Conduct on Data Centres Energy Efficiency, Version 1.0, 2008
5 Low or Zero Carbon Energy Sources: Strategic Guide, Office of the Deputy Prime Minister, HMG, 2006
6 Heads Up! Your Carbon Footprint is Under Scrutiny: HP's Approach to Sustainability, Hewlett Packard, 2009, TUI Conference Proceedings
7 Efficient Data Center Summit, Google, 1 April 2009

8 ENERGY STAR Data Center Infrastructure Rating Development Update, Web Conference, 29 September 2009
9 LBNL Survey of the Power Usage Efficiency of 24 Datacenters, 2007 (Greenberg et al.)
10 Reliable Data Center Guideline, BITKOM, http://www.bitkom.org/en/default.aspx
11 Tier Classifications Define Site Performance, The Uptime Institute, 2009
12 European Data Centres, CB Richard Ellis, Q2 2009
13 Carrier Neutral Data Centre per Rack Space, April 2007 to April 2009, http://www.mobilepricing.com/
14 SunGard Availability, quoted in Data Centres: The Backbone of the UK Economy, 2009, http://www.intellectuk.org/content/view/23/3/
15 Building Design, 20-6-08, http://www.bdonline.co.uk/story_attachment.asp?storycode=3116045&seq=9&type=t&c=1#ixzz0zbzdvd8b
16 Performance in Higher Education Estates, EMS Annual Report 2008, HEFCE
17 Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions. Lawrence Berkeley National Laboratory, Berkeley, California, July 2009
18 Building Regulations L2A, Conservation of Fuel and Power in Buildings Other than Dwellings, 2006
19 Biomass figures courtesy of Dr Simon Gerrard of the University of East Anglia, email 18-1-10
20 ASHRAE Environmental Guidelines for Datacom Equipment, 2008
21 BNXS01: Carbon Dioxide Emission Factors for UK Energy Use, Market Transformation Programme, March 2009

Appendix 1: Space requirements of a data centre

Computer room (server room, machine room etc.): To house the computer racks and communications equipment. A generic space with sufficient air conditioning, power supplies and communications cabling to allow a non-application-specific IT environment with best use of space.

Control room: An area, adjoining the computer room, where all control and monitoring functions relevant to the site are concentrated.

General office area: An office area where the IT staff can work.

Telecommunications entrance facility: A room or area where all external communications cabling enters the building. It serves as a point of demarcation between different owners of cabling, provides a point for over-voltage protection and allows a transition from external (flammable) cables to internal cables (required by BS 6701). Two are required for Tier 3/Tier 4.

Fire gas suppression store: If inert gas is used as the main fire suppression system then it requires a large volume for storage. Alternatively the gas bottles may be placed against a wall adjacent to the computer room, or, if a fluorocarbon gas is used (placed within the computer room), this area may be dispensed with.

Electrical switch room: A room where the external power cables enter the building; it forms a point of demarcation between different cable owners and houses all main switching and metering.

UPS and battery room: For loads in excess of 100 kVA (TIA 942) it is recommended to have a separate UPS and battery room, to save space and heat load in the main computer room.

Generator room: To house the standby diesel generators. This may be in or adjacent to the main building.

Oil store: To house the diesel fuel to run the standby generators for between 8 and 96 hours. This may be in or adjacent to the main building. The electrical switch room, UPS and generator must all be close to each other to minimise electrical losses in long power cables.

Storage and build area: An area to store and unpack equipment and to build items like racks without making dust and causing disruption in the main computer room.

Delivery and loading area: An area adjacent to the main doors to allow heavy equipment to be shipped into the building.

Main entrance: A secure entrance with anti-piggybacking airlock controls.

Planning and meeting room: A room to hold meetings and provide additional office space.

Internal staff facilities: Male/female/disabled toilets, shower room, basic dining area and kitchen facilities.

Electrical substation: Due to the power load it is likely that a separate electrical substation would be needed by the utility company. This should be away from the main building to reduce EMC issues.

Air conditioning condensers: If split DX units are to be used then a condenser unit is needed for each computer room DX unit. These must be in a secure area either adjacent to the main building or even on top of it, but preferably not over the computer room itself.

Main gate and hard standing: A secure main gate leading onto a hard standing area of sufficient space and strength for HGVs to unload heavy equipment and manoeuvre.

External staff facilities: Parking space for cars, bicycle storage and a smoking shelter.

Requirements of the spaces within an ideal data centre (from TIA 942)

Appendix 2: Infinity Dark Green Energy Data Centre

Infinity Dark Green Energy Data Centre Takes a Step Closer
http://infinitysdc.net/ Tuesday, 08 December 2009

The UK's first operational data centre to be powered entirely using on-site green energy has taken a step closer to completion with the commencement of installation of anaerobic digester plant and biomass generators at the Infinity ONE campus in East Anglia. The dark green energy produced will be delivered directly to client data halls.

Martin Lynch, CEO, Infinity SDC, said: "The introduction of the Energy Act 2008 and its anticipated effect upon businesses is forcing innovation in the way data centres are operated. The ability to provide assured IT services with little or no carbon emission is becoming increasingly important to companies with large energy usage who will be affected by the Cap and Trade scheme which the Government is introducing. Today, the use of green energy is not just a matter of conscience but a pragmatic financial and tax decision for CFOs."

Situated within easy access of the main arterial routes to the City of London and with excellent fibre connectivity, the Infinity ONE campus provides up to 14,000 m² of technical space in client-dedicated 500 m² data halls. Each will support power densities from 1,000 W/m² to 2,500 W/m² with sufficient cooling capacity to ensure reliable and efficient operation. The data halls are laid out in a hot aisle/cold aisle configuration; fresh-air cooling combines with the option for ground-source heat pumps to allow even greater energy and cost savings.

Martin Lynch said: "The Infinity ONE development proves that 21st century data centres can be deployed with little environmental impact. The data centre and its dark green power source are surrounded by the farmland that supplies them, and provide a significant enhancement in income to the local agricultural community as well as providing organic nutrient to our diminishing soil quality. It's a truly virtuous circle which supports two sectors which are vital to UK PLC."

The new system enables dark green energy to be generated from bio-matter supplied by a local farming co-operative. The bio-matter is broken down in an anaerobic digester, providing two by-products: methane and a nutrient-rich digestate. The gas is used as fuel for the highly efficient CHP plant, while the fertiliser is returned to condition agricultural land. An advantageous fit between crop cycles means that both food and fuel can be produced on the same local farmland without degradation of the soil's organic composition.

Infinity ONE is the UK's first carbon neutral data centre campus, and the site was specifically selected for its potential to support biomass, ground source and solar power generation. The project re-purposes existing, highly secure buildings as client-specific data centres, further reducing the carbon impact which would have been created through demolition, waste disposal and the erection of new structures. For more details, please visit http://www.infinitysdc.com

About Infinity

Infinity SDC provides a unique, outsourced data centre design, build and operations service to private and public sector organisations. Focused on measuring total cost of ownership, Infinity's service contracts enable businesses to take full advantage of lower operating costs at a time when capital is highly constrained. The company is fully committed to reducing both the energy waste and the carbon footprint of the IT sector and has adopted the recommendations within the EU Code of Conduct on Data Centre Energy Efficiency. Working in partnership with local agriculturalists, planning permission has been obtained for a sustainable, biomass-fuelled data centre campus using dark green energy generation on site at its Infinity ONE location.

Infinity TWO is possibly the last major data centre development able to comply with current planning requirements inside the M25. This major development, in excess of 100,000 square feet of technical space, provides Tier 3 data centre capacity in a readily accessible location, just 15 minutes from Liverpool Street station but away from all risks including flooding. Both sites are located along major telecoms carriers' main fibre routes to Central London and utilise common design standards to provide dedicated and resilient M&E systems to each of Infinity's customers. Together with high levels of physical security, the provision of independent power and cooling infrastructure helps ensure availability of IT services to all of Infinity's customers.

Infinity SDC is funded by leading private banks, having raised in excess of £23.5 million in a recent funding round, and holds assets of over 300,000 ft² for data centre development within the South East. Please visit www.infinitysdc.com for more information.

Press contacts: Damien Wells, SPA Communications, dwells@spacomms.co.uk

Appendix 3: Optimal sized data centre

Appendix 4: Values to be used in calculations, from the Building Regulations L2A, Conservation of fuel and power in buildings other than dwellings

Target Carbon Dioxide Emission Rate (TER) = C_notional x (1 - improvement factor) x (1 - LZC benchmark)
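A minimal sketch of applying the TER formula above; the improvement factor and LZC benchmark values used here are illustrative placeholders, not values taken from the regulations.

```python
# Sketch of the Part L2A Target Emission Rate formula above.
def target_emission_rate(c_notional: float,
                         improvement_factor: float,
                         lzc_benchmark: float) -> float:
    """TER = C_notional x (1 - improvement factor) x (1 - LZC benchmark)."""
    return c_notional * (1 - improvement_factor) * (1 - lzc_benchmark)

# Placeholder example: notional rate of 100 kgCO2/m2 p.a., assumed 20%
# improvement factor and 10% LZC benchmark (illustrative values only).
print(target_emission_rate(100.0, 0.20, 0.10))  # -> 72.0 kgCO2/m2 p.a.
```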

Appendix 5: Existing computer room facilities at UEA

There are two existing computer rooms at the UEA campus; on an inspection by Capitoline on 13-1-09 the following parameters were observed.

Computer room 1

8.1 m x 21 m (170 m²), with 3.7 m from finished floor to ceiling and a raised floor void of 200 mm. There are 52 racks with a physical space loading of about 75%. The total power load for the IT equipment was 80 kW (86 kVA with a power factor of 0.94).

Computer room 2

21 m x 11.8 m (248 m²), with 2.62 m from the finished floor to the ceiling and a 500 mm raised floor void. There are 58 racks at about one third loading by space. The IT load in the room is 73 kW (77 kVA, 0.95 power factor).

The total power load of these two rooms is of the order of 280 kW. The IT equipment within the current 110 racks, occupying 418 m² across the two rooms, could be condensed into 40 racks taking about 140 m² if this was required.
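As a closing cross-check, the sketch below reconciles the measured IT loads with the quoted 280 kW total, assuming the same 18% power overhead and 35% HVAC share used in chapter 8; that assumption is ours and is not stated in the survey.

```python
# Reconcile the measured IT loads with the ~280 kW total quoted above.
cr1_it_kw, cr2_it_kw = 80, 73
it_kw = cr1_it_kw + cr2_it_kw           # 153 kW of measured IT load
total_kw = it_kw * 1.18 / (1 - 0.35)    # +18% overheads, 35% HVAC share
print(f"Estimated total load: {total_kw:.0f} kW")  # ~278 kW, i.e. ~280 kW

# Condensing 110 racks into 40 implies an average per-rack IT draw of:
print(f"IT load per consolidated rack: {it_kw / 40:.1f} kW")  # ~3.8 kW
```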