Consultant Programme WHITE PAPER
FEBRUARY 2006

Data centre design

By Barry Elliott BSc, MBA, RCDD, C.Eng, MIEE
engineeringeducation.co.uk
Engineering Education Ltd 2006
Registered office: 22 Foxcover Road, Heswall Hills, Wirral CH60 1YB

Contents
1.0 Purpose
2.0 Disclaimer
3.0 Introduction
4.0 Physical location of the data centre
5.0 Sizing and capability audit
6.0 The hot aisle/cold aisle design concept
7.0 Specifying a raised floor
8.0 Equipment racks and cabinets
9.0 Heating, Ventilation and Air Conditioning (HVAC) within the data centre
10.0 Electrical systems to and within the data centre
11.0 Earthing, bonding and the Signal Reference Grid
12.0 Fire detection, alarm and suppression within the data centre
13.0 Communications cabling and containment
14.0 Security, access control and CCTV
15.0 Building Management Systems, from rack to room level
16.0 Tiering, H&S and other project management issues
Appendix 1 Standards referenced

UK Head Office: Connectix Limited, 33 Broomhills Industrial Estate, Braintree, Essex CM7 2RW. Tel: [email protected]
Republic of Ireland Office: Connectix Limited, 29 Westlink Industrial Estate, Kylemore Road, Dublin 10, Ireland. Tel: [email protected]

Document ref: EE-TDT. ENGINEERING EDUCATION Ltd. Issue 001. License EEL05L02

1.0 Purpose

This document is a design tool to assist designers to identify all the processes and activities required to fully define the requirements of a data centre to industry standards and best-practice parameters. It will allow a preliminary design stage to be reached, with a client feedback loop, enabling fully costed design proposals to be undertaken.

2.0 Disclaimer

This document is intended for the use of persons qualified in the electrical, mechanical and construction requirements of a data centre. This document quotes figures and extracts from international standards, but this does not absolve the user from full knowledge and usage of the original standards themselves. Every effort has been made to supply a complete and up-to-date technical précis of the current international, European and British standards and regulations concerned, but the fitness-for-purpose and final design remains the responsibility of the document user. Except where other documents have been quoted, this document remains the copyright of Engineering Education Ltd and its reproduction is forbidden under the Copyright, Designs and Patents Act 1988. Licences may be obtained from [email protected].

3.0 Introduction

A data centre is "a building or portion of a building whose primary function is to house a computer room and its support areas", according to TIA 942. This design guide is based upon the requirements of TIA 942, Telecommunications Infrastructure Standard for Data Centers, April 2005. Although this is an American standard invoking other American standards and codes, it is far more substantive than the equivalent CENELEC EN data centre standard, which is still at draft stage. However, this document expands upon the TIA 942 standard and incorporates all the requirements of European and British standards, Directives and Regulations. These include EN 50173, EN 50174, EN 50310, BS 5839, BS 6701, BS 7671, the UK Building Regulations, the Disability Discrimination Act and many others.
They are all detailed in Appendix 1. Many diverse areas need to be addressed to fully design and specify a data centre. It is essential to agree at the start of the project exactly who is responsible for every item; otherwise the final build will be severely compromised if a vital design element has been overlooked or is incompatible with other services.

A data centre design project can be split into the following sections:

1. Location
2. Construction
3. Definition of the spaces and size available
4. Planning the layout of the computer room floor
5. Designing the raised floor
6. Calculating day-one and future IT requirements
7. Calculating day-one and future air conditioning requirements
8. Deciding upon the type and location of the air conditioning units
9. Calculating day-one and future power supply requirements
10. Sizing and location of UPS and standby generators
11. Designing the earth bonding and signal reference grid
12. Designing the power distribution system within the computer room and within the equipment racks
13. Lighting, emergency lighting and signage
14. Access control, security and CCTV requirements
15. Fire detection, alarm and suppression system, including hand-held fire extinguishers
16. Specifying and designing the structured cabling system and its containment system
17. Organising connections to external telecommunications providers and the Entrance room
18. Integration of Building Management Systems with other command and monitoring networks and their appearance at a control room
19. Project management issues, health & safety and ongoing operational and maintenance issues

Data centre projects are either greenfield new-build projects or conversion/renovation projects. In either case it is advisable to undertake a complete audit of what exists already or of the proposed designs. Apart from meeting the day-one designs and proposed expansion plans, it is also necessary to decide which level of backup or redundancy will be built in to the finished location. For data centres these levels are now designated as Tier 1, 2, 3 or 4, with Tier 4 being the highest level of redundancy. The tiering level is described in great detail in the TIA 942 standard, which in turn has taken much of its philosophy from the Uptime Institute. A very brief summary is given in the table below.
In the terminology of redundant systems, N means just enough equipment to do the job, N+1 means one additional unit acting as a redundant spare, and 2(N+1) means two complete, independent (N+1) paths, each capable of doing the whole job. January 2006

                                 Tier I        Tier II       Tier III         Tier IV
Site availability                99.671%       99.741%       99.982%          99.995%
Downtime (hours/yr)              28.8          22.0          1.6              0.4
Operations centre                Not required  Not required  Required         Required
Redundancy for power, cooling    N             N+1           N+1              2(N+1)
Gaseous fire suppression system  Not required  Not required  Approved system  Approved system
Redundant backbone pathways      Not required  Not required  Required         Required

[Figure: The relationship of the spaces within a data centre]
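The downtime row follows directly from the availability percentages (8,760 hours in a year). A minimal Python sketch of that arithmetic is below; note the published Tier II downtime figure (22.0 h/yr) is slightly more optimistic than the raw calculation suggests.

```python
# Annual downtime implied by a site availability percentage.
TIER_AVAILABILITY = {
    "Tier I": 99.671,
    "Tier II": 99.741,
    "Tier III": 99.982,
    "Tier IV": 99.995,
}

def annual_downtime_hours(availability_percent: float,
                          hours_per_year: float = 8760.0) -> float:
    """Hours per year of downtime implied by the given availability."""
    return (1.0 - availability_percent / 100.0) * hours_per_year

for tier, pct in TIER_AVAILABILITY.items():
    print(f"{tier}: {pct}% -> {annual_downtime_hours(pct):.1f} h/yr downtime")
```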

4.0 Physical location of the data centre

Physical location and architectural audit

4.1 Does the building and rooms exist, or are building works required?
4.2 Any known seismic problems?
4.3 Any known subsidence problems?
4.4 Any known flooding problems? Recommendation: not on a 100-year flood plain. Ref: TIA 942 (F.6)
4.5 Any known security/criminal problems likely with this area?
4.6 Is connection to mains/telecoms services available?
4.7 Is there very close proximity to main roads, railway lines, airports, oil or chemical storage or works? Recommendation: should be 0.8 km away from a major highway and 0.4 km away from chemical plants, dams etc. Ref: TIA 942 (F.6)
4.8 Is there easy access to the site?
4.9 Are there lifts/goods lifts available if not on the ground floor?
4.10 Are there any excessive external noise sources?
4.11 Will this unit be a cause of noise or disturbance to adjoining offices?
4.12 Any potential EMC problems, e.g. mobile phone masts, lift motors on the other side of a wall etc? Recommendation: any interfering fields should be less than 3 V/m. Ref: TIA 942 (F.2)
4.13 Is there access to a suitable external site for the air-con heat exchangers?
4.14 Any other known safety or location issues that need to be recorded, such as presence of asbestos?
4.15 Is the building or room susceptible to lightning strikes?
4.16 Is out-of-hours access to the site possible? Are there any issues concerning planning permission, conservation zones or building listing?
4.17 Are there separate office, storage or parking areas available for contractors?
4.18 Has the room or design been audited to comply with the Disability Discrimination Act? Recommendation: the requirements of the Disability Discrimination Act may be taken from BS 8300:2001, Design of buildings and their approaches to meet the needs of disabled people. Code of practice, and Building Regulations 2000 Part M, Access and facilities for disabled people.

5.0 Physical sizing and capability of the data centre

Sizing and room capability audit

5.1 What are the dimensions of the data centre?
5.2 What are the dimensions of the computer room?
5.3 What other areas have been allocated, e.g. office area, entrance room etc?
5.4 What is the height of the computer room? Recommendation: minimum of 2.6 m from the finished floor. Ref: TIA 942
5.5 Is the floor load capacity acceptable? Recommendation: the minimum distributed floor loading capacity shall be 7.2 kPa; the recommended distributed floor loading capacity is 12 kPa. Ref: TIA 942
5.6 Where are the doors and what are their sizes? Recommendation: doors shall be a minimum of 1 m wide and 2.13 m high, without doorsills, hinged to open outward (code permitting) or slide side-to-side, or be removable. Doors shall be fitted with locks and have either no centre posts or removable centre posts to facilitate access for large equipment. Exit requirements for the computer room shall meet any local requirements. Ref: TIA 942
5.7 Is there lighting in place and is it adequate? Recommendation: lighting shall be a minimum of 500 lux in the horizontal plane and 200 lux in the vertical plane, measured 1 m above the finished floor in the middle of all aisles between cabinets. Ref: TIA 942
5.8 Is emergency lighting and signage fitted/planned? Emergency lighting is principally described in BS 5266-1, The Code of Practice for Emergency Lighting, amongst others. Exit signage is principally described in BS 5499-4:2000, Safety signs, including fire safety signs. Code of practice for escape route signing.
5.9 Will the emergency lighting require its own battery back-up supply? Ref: BS 5266
5.10 Is the basic décor acceptable? Recommendation: décor to be finished in a light colour with minimal glare and dust generation. Ref: TIA 942
5.11 Does equipment not related to the support of the computer room (e.g. piping, ductwork etc.) pass through, or enter, the computer room? Recommendation: there should be no other services passing through the computer room. Ref: TIA 942
5.12 Is there a fresh water supply and drainage network available?

6.0 The hot aisle/cold aisle concept

In trying to design a standardised, modular and upgradeable space for IT and communications equipment, much thought needs to be given to rack location and the method of supplying power, communications and refrigerated air to it. The standard model has been defined by TIA 942, ASHRAE and other authoritative sources as being based on a front-to-back cooling regime, with rows of racks facing each other. Cold air is supplied to the front of these racks through air vents placed in the raised floor in front of them. The chilled air is fed to these vents from air conditioning units blowing into the plenum space formed by the raised floor. The vented aisle is thus known as the cold aisle; the cold air is drawn through the equipment racks by the IT equipment's own fans and expelled out of the back into what is now the hot aisle. The rising hot air from this aisle finds its way back to the air conditioning unit to be chilled and to repeat the cycle. The fronts of the two facing racks are two whole floor tiles apart, and when the depth of the rack and the necessary access clearance space behind it are taken into account, we can see that the minimum realistic pitch before the pattern repeats itself is seven tiles. Feeding cold air through standard 25%-open floor vents into a rack with no additional cooling methods normally limits the heat dispersion to about 2 kW per rack, or about five average servers. Other upgrade paths are available to get more air through the rack; this will be explained in more detail later. A lot of communications equipment is designed for side-to-side cooling, so additional consideration needs to be given to cope with this variation, but in general the hot aisle/cold aisle, 7-tile pitch system is considered to be the base model by the relevant standards and industry sources.
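The seven-tile geometry above can be sketched numerically. This assumes the standard 600 mm floor tile; the 30 m room length is purely illustrative.

```python
# Hot-aisle/cold-aisle geometry: 2 tiles of cold aisle + two rack rows
# (~1.5 tiles deep each) + 1 tile of hot aisle repeats every 7 tiles.
TILE = 0.6  # metres, standard 600 x 600 mm floor tile

def row_pairs_fitting(room_length_m: float, pitch_tiles: int = 7) -> int:
    """How many facing rack-row pairs fit along a room at the given pitch."""
    pitch_m = pitch_tiles * TILE
    return int(room_length_m // pitch_m)

print(row_pairs_fitting(30.0))  # a 30 m room fits 7 row pairs at a 4.2 m pitch
```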

7.0 Specifying a raised floor

The raised floor will be based on 600 x 600 mm floor tiles with an anti-static finish to the relevant IEC specification, and will be not less than 300 mm in height. A guide to floor heights, when the floor is used as an air distribution plenum, comes from VDI 2054, Air conditioning systems for computer areas: smaller floor areas require approximately 400 mm, rising through approximately 700 mm and 800 mm as the floor area grows, with floors of more than 2000 m² requiring more than 800 mm. Other guidance for floor height comes from IBM, SUN (450 mm minimum, 600 mm ideal), BS EN 12825 and TIA 569. The Property Services Agency (PSA) Method of Building Performance Specification 'Platform Floors (Raised Access Floors)', MOB PF2 PS, was the de facto industry standard in the UK for about 20 years until the recent arrival of the BS EN 12825:2001 specification. In July 2001 the European standard EN 12825, Raised access floors, was approved by CEN as a voluntary specification for private projects and a mandatory one for public projects. For floor strength, the minimum distributed floor-loading capacity shall be 7.2 kPa; the recommended distributed floor loading capacity is 12 kPa (TIA 942). From MOB PF2 PS and BS EN 12825 this means specifying Heavy Duty or, preferably, Extra Heavy Duty floor grade. The plenum area formed under the raised floor must be clean, sealed, dust free, fitted with a vapour barrier and sealed to a level of air permeability of at least 3 m³/h/m² at 50 Pa (Building Regs Part F).
The reasons for pressure-sealing the plenum area are:

- Chilled air will be able to escape through poorly finished floor tiles and service penetrations, leading to: more electricity consumed to replace that air; an inability to deliver the volume of chilled air required at the floor vents; and a variation in air pressure across the floor, leading to an inability to deliver chilled air at the air vents.
- Unsealed service penetrations (cables/pipes etc.) into the plenum area are a fire risk and will allow the spread of fire and smoke into or out of the computer room (Building Regs, Part B).
- Gaseous fire suppression systems rely on lowering the level of oxygen available to fires and depend upon a sealed area in which to work, to prevent oxygen from re-supplying the fire. BS ISO 14520-1:2000(E), Gaseous fire-extinguishing systems. Physical properties and system design. General requirements, requires a pressure test every twelve months.
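The Part F permeability figure quoted above translates directly into a maximum acceptable leakage rate for a given floor area; a minimal sketch (the 200 m² floor area is illustrative):

```python
# Maximum plenum envelope leakage at the quoted air-permeability limit
# of 3 m^3/h per m^2 of floor area, measured at 50 Pa.
def max_plenum_leakage(floor_area_m2: float,
                       permeability_m3_h_m2: float = 3.0) -> tuple[float, float]:
    """Return (m^3/h, litres/s) of allowable leakage for the floor area."""
    m3_per_h = floor_area_m2 * permeability_m3_h_m2
    l_per_s = m3_per_h * 1000.0 / 3600.0
    return m3_per_h, l_per_s

m3h, lps = max_plenum_leakage(200.0)
print(f"{m3h:.0f} m3/h = {lps:.1f} l/s at 50 Pa")  # 600 m3/h for a 200 m2 floor
```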

An aspirating (early warning) smoke detection system shall be placed in the plenum zone (TIA 942). Where a need for a fire suppression system in a sub-floor space is deemed appropriate, consideration should be given to clean agent systems as a means to accomplish this protection (TIA 942). The under-floor area must not be used for any purpose other than the supply of air and the distribution of cables. Cables must be fire rated according to the local jurisdiction and must be placed so as not to impede airflow. All redundant cables must be removed (National Electrical Code 2002).

8.0 Equipment racks and cabinets

Computing and communications equipment has been located in racks, usually 19-inch based, for at least the last thirty years. Racks, or frames, come in all shapes and sizes: from a few hundred millimetres high to over two metres high; 600 or 800 mm wide; and from 600 to 1200 mm deep. The internal fittings are usually based on a 23-inch pitch for telecommunications and 19-inch for everything else. A handful of EIA, IEC and ETSI standards cover the physical dimensions of the rack, such as EIA-310-D. The vertical spacings for the installed equipment are based on Rack Units, or just U, where one U is 44.45 mm. The main frame of the rack can be based on a four-post construction, i.e. making a rectangular frame, or the space-saving two-post system, which is essentially two pieces of vertically placed metal spaced 19 inches apart (apologies for mixing metric and imperial units here, but that is the common practice!). A server rack needs to be a four-post enclosed unit. The purpose of the rack is:

- to hold and securely locate electronic equipment;
- to provide organised routing for power and communications cabling;
- to assist in the airflow and cooling of the equipment;
- to provide the above in an aesthetically pleasing construction.

8.1 Size

Racks/cabinets are usually 600 mm wide with a useable internal space of 42U for 19-inch rack-mounted equipment. This gives a rack height of just over two metres. Slightly larger (and of course smaller) versions are available, but 42U seems a popular choice. Depth is at least 800 mm but may be up to 1.2 m; a one-metre depth allowance seems average. TIA 942 states: Refer to ANSI T1.336 for additional specifications for cabinets and racks. In addition to the requirements specified in T1.336, cabinet and rack heights up to 2.4 m and cabinet depths up to 1.1 m may be used in data centers (although 2.1 m is recommended). Cabinets should have adjustable front and rear rails.
The rails should provide 42 or more rack units (RUs) of mounting space. Rails may optionally have markings at rack unit boundaries to simplify positioning of equipment. Active equipment and connecting hardware should be mounted on the rails on rack unit boundaries to most efficiently utilize cabinet space. If patch panels are to be installed on the front of cabinets, the front rails should be recessed at least 100 mm (4 in) to provide room for cable management between the patch panels and doors.

8.2 Ventilation

This is a key area of differentiation between standard equipment racks and server racks. A server rack must cope with the ventilation demands of many kilowatts' worth of electrical equipment; a standard glass-fronted rack with a horizontal fan tray fitted can only cope with the cooling demands of less than a kilowatt. It would appear that a suitably ventilated rack, supplied with adequate chilled air through a standard floor tile, can cope with about two kilowatts of heat dissipation, where the motive force through the rack is provided only by the fans within the server units themselves. The amount of ventilation required is stated by several sources and is expressed as a ratio of open space to overall door area, e.g.:

- "...servers require that the front and back cabinet doors be at least 63% open for adequate airflow." (SUN)
- "One method of ensuring proper cooling is to specify rack doors that provide over 830 in² (0.53 m²) of ventilation area or doors that have a perforation pattern that is at least 63% open." (APC)
- "Racks (cabinets) are a critical part of the overall cooling infrastructure. HP enterprise-class cabinets provide 65 percent open ventilation using perforated front and rear door assemblies. To support the newer high-performance equipment, glass doors must be removed from older HP racks and from any third-party racks." (HP)
- "...the cabinet should either have no doors or, if required for security, doors with a minimum 60% open mesh for maximum airflow, and is best not equipped with top-mounted fan kits." (Chatsworth)
- "Ventilation through slots or perforations of front and rear doors to provide a minimum of 50% open space. Increasing the size and area of ventilation openings can increase the level of ventilation." (TIA 942)
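Checking a candidate door against these vendor figures is simple arithmetic; a sketch follows, in which the 2.0 m x 0.6 m door dimensions are a purely hypothetical example.

```python
# Open ventilation area of a perforated rack door, checked against the
# figures quoted above (>= 63% open, or >= 0.53 m^2 of open area per APC).
def open_area_m2(door_height_m: float, door_width_m: float,
                 open_fraction: float) -> float:
    """Open area of a perforated door in square metres."""
    return door_height_m * door_width_m * open_fraction

# Hypothetical 2.0 m x 0.6 m door with a 63%-open perforation pattern:
area = open_area_m2(2.0, 0.6, 0.63)
print(f"{area:.3f} m^2 open")  # comfortably above the 0.53 m^2 APC figure
```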

When the heat load goes above about 2 kW (about five average servers), an escalation policy is required, which can take the form of:

- increasing floor tile vent size up to 75% open area;
- replacing floor tiles with fan-assisted grate tiles;
- adding specialised fan units to the top and/or bottom of the rack;
- using cabinets where the entire rear door is a fan unit.

The above solutions will take the heat dissipation capability up to about 6 kW per rack. Above that, more specialised racks need to be used where the whole rack is fed by a chilled water supply. These designs can cope with loads in excess of 20 kW. New designs using liquid carbon dioxide claim cooling capacities of over 30 kW per rack. It is also important that the front-to-back cooling scheme adopted in such racks is not compromised by gaps in the rack allowing cooled air to mix with hot air drawn back through the gaps (Thermal Guidelines for Data Processing Environments, ASHRAE). For this reason all gaps in the rack must be filled in with blanking plates. Also, excessive gaps for cabling at the side of the racks should be sealed with an air dam kit, and any cable entry points at the bottom of the rack should be sealed with a brush strip.

8.3 Power

The rack needs to be powered, and in Europe this would generally be provided by a 16 or 32 amp, 230 V single-phase feed through an IEC connector. At least two feeds are required for redundancy and backup purposes, so a dual 32 amp feed would be counted as supplying 32 x 230 = 7.36 kVA (remember that useful power is measured in watts, which is amps x volts x power factor). For loads above 7 kVA, either more 32 amp feeds are supplied or a three-phase supply is provided, which would normally deliver at least 22 kW through a five-pin version of the IEC connector. For a three-phase supply, BS 7671 requires a warning notice to be secured in such a position that the warning is seen before access is gained to live parts.
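The feed-sizing arithmetic above can be sketched as follows. This assumes a 400 V line-to-line three-phase supply (usual in the UK) and, for the kVA-to-kW comparison, a power factor of one.

```python
import math

# Apparent power of rack feeds: single-phase is V x I; three-phase is
# sqrt(3) x V(line-to-line) x I.
def single_phase_kva(volts: float, amps: float) -> float:
    return volts * amps / 1000.0

def three_phase_kva(volts_line_to_line: float, amps: float) -> float:
    return math.sqrt(3) * volts_line_to_line * amps / 1000.0

print(single_phase_kva(230, 32))        # 7.36 kVA per 32 A feed
print(three_phase_kva(400, 32))         # ~22.2 kVA, matching the ~22 kW figure
```

Real useful power is the kVA figure multiplied by the load's power factor, which is why a 22 kVA supply is quoted as delivering "at least 22 kW" only for near-unity power factors.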
Within the rack the power is distributed by what is widely known as a power distribution unit, or PDU. There does not seem to be a widely accepted definition of a PDU; at its simplest it is just a power strip of sockets that distributes the incoming electricity to the rack equipment. However, more functionality is available in the form of:

- sequential start-up;
- automatic crossover switching between two supplies;
- power line conditioning;
- reporting of status and power usage, which in turn may be a simple LED readout on the unit or part of an IP-addressable managed system.

8.4 Control and monitoring

A data centre server rack must be secure and be able to monitor and report its environmental status back to some central control point. The monitoring system may be part of a building-wide Building Management System (BMS), an add-on localised monitoring scheme, or a built-in rack-monitoring scheme designed and dedicated to the task. TIA 942 states: "A Building Management System (BMS) should monitor all mechanical, electrical, and other facilities equipment and systems." The rack sensor system should be able to detect the following:

- temperature
- smoke
- water
- humidity
- access
- vibration
- airflow
- particles in the incoming airflow

And respond with one or more of the following:

- visual alarm on top of the cabinet
- audible alarm
- networked alarm
- CCTV

8.5 Rack location

The standard model described in TIA 942 and elsewhere depends upon the hot-aisle/cold-aisle concept described in section 6 of this document. In this model, chilled air is pumped into the plenum/raised floor area beneath the racks and made available by vented floor tiles placed in front of the racks. This also requires the 7-tile pitch approach.

The 7-tile pitch requires that the front edges of the two facing cabinets are placed in line with the edge of a floor tile, and two complete floor tiles, i.e. 1.2 m, separate the two facing cabinets, thus forming the cold aisle. The depth of the rack will cover about one and a half floor tiles, and so a complete floor tile is needed in the hot aisle for access. This arrangement means that the set will repeat itself every seven tiles, or 4.2 metres. Apart from the 7-tile arrangement, TIA 942 also requires a minimum of 1 m of front clearance for installation of equipment and a minimum of 0.6 m of rear clearance for service access, although a rear clearance of 1 m (3 ft) is preferable. Some racks have split rear doors to facilitate rear clearance. IEEE 1100, referenced in TIA 942, suggests a clearance of two metres from building structural steel in case of lightning flashovers.

8.6 Cable management

Cables may enter from the top or bottom of the rack, or both. If coming up from the bottom, a cable brush seal is required to prevent chilled air from entering and confusing the front-to-rear airflow scheme. All cables shall be neatly dressed and secured, with minimum bend radii protected according to the standards or the manufacturer's instructions. All cables must be adequately labelled as described in TIA 942, TIA 606 and elsewhere. A vertical cable manager shall be installed between each pair of racks and at both ends of every row of racks. The vertical cable managers shall be not less than 83 mm in width. Where single racks are installed, the vertical cable managers should be at least 150 mm wide. The cable managers should extend from the floor to the top of the racks. Horizontal cable management panels should be installed above and below each patch panel. The preferred ratio of horizontal cable management to patch panels is 1:1.

8.7 Health and safety

Equipment racks can be very heavy, over 500 kg.
It is essential that:

1) the concrete floor beneath the raised floor is strong enough, and finished flat;
2) the raised floor is strong enough, the pedestals are securely fixed and the floor is finished flat;
3) equipment racks are levelled onto the raised floor;
4) local seismic regulations for fixing are obeyed;
5) the heaviest equipment is placed at the bottom;
6) extendible stabilisers are used when sliding heavy equipment out of a rack;
7) racks are bayed together;
8) all racks, including doors, are earthed according to local regulations;
9) any removed floor tile positions are surrounded by warning signs.

9.0 Heating, ventilation and air conditioning

TIA 942 recommends that the following conditions be maintained in the computer room:

- Relative humidity: 40 to 50%
- Dry bulb temperature: 20 °C to 25 °C
- Maximum dew point: 21 °C
- Maximum rate of change: 5 °C per hour
- A positive pressure will be maintained with respect to surrounding areas

The precision air conditioning facility must be available 24 hours a day, 365 days per year, and connected to the standby generator in the event of a mains failure. The ambient temperature and humidity shall be measured after the equipment is in operation. Measurements shall be taken at a distance of 1.5 m above floor level every three to six metres along the centre line of the cold aisles, and at any location at the air intake of operating equipment. Temperature measurements should be taken at several locations of the air intake of any equipment with potential cooling problems. Details are contained in Thermal Guidelines for Data Processing Environments (ASHRAE).

Air conditioning may be achieved by:

- direct expansion Computer Room Air Conditioning units (CRAC) in the computer room;
- centralised chiller units supplying chilled water to heat exchange units within the computer room;
- chilled water supplied directly to heat exchange units built into equipment racks;

or any combination of the above. Small to medium sized data centres tend to go for the direct expansion (DX) CRAC units placed in the computer room. Larger facilities tend towards the centralised chiller and cold water distribution. Directly cooled racks have so far tended to be an upgrade path when conventional room cooling runs out of capacity, but there is no reason why they couldn't be designed in from the start, especially when floor space is at a premium. The mathematics of air conditioning shows that to remove one kilowatt of heat with an air temperature rise of around 11 °C, approximately 160 cfm (cubic feet per minute), or 74 litres/second, of air needs to flow through that equipment.
The literature suggests that in practice an adequately constructed and sealed raised floor, supplied with sufficient chilled air, can deliver about 320 cfm of air through a standard 25% floor vent, which implies that one floor vent, in these circumstances, can cool around 2 kW of equipment if placed in front of an equipment rack. There are many variables in this equation, e.g.:

- Are the CRAC units supplying a sufficient volume of air at the correct temperature?
- Is the underfloor plenum area deep enough and clutter-free to allow free airflow?
- Is the underfloor plenum sealed well enough to maintain the correct excess air pressure?
- Is the excess pressure evenly distributed around the floor area? This in turn depends upon the above factors, plus the depth of the floor void and the number, size and location of other floor vents.
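The 160 cfm / 74 l/s rule of thumb follows from the heat capacity of air. A sketch of the calculation, assuming air density of about 1.2 kg/m³ and specific heat of about 1005 J/(kg·K):

```python
# Airflow needed to remove a given heat load with a given air temperature
# rise: mass flow = Q / (cp * dT), then convert to volume via density.
RHO = 1.2    # kg/m^3, approximate density of air near sea level
CP = 1005.0  # J/(kg K), approximate specific heat of air

def airflow_l_per_s(heat_w: float, delta_t_c: float) -> float:
    """Volumetric airflow (litres/second) to remove heat_w watts."""
    mass_flow_kg_s = heat_w / (CP * delta_t_c)
    return mass_flow_kg_s / RHO * 1000.0

lps = airflow_l_per_s(1000.0, 11.0)
cfm = lps * 2.11888                     # 1 l/s is about 2.119 cfm
print(f"{lps:.0f} l/s = {cfm:.0f} cfm") # ~75 l/s, ~160 cfm per kW
```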

A successful design thus depends upon:

- designing an appropriate raised floor;
- sizing the air conditioning requirements in light of day-one load, future expansion and redundancy requirements;
- correctly positioning the CRAC units and air return path;
- correct location of the equipment racks in the hot-aisle/cold-aisle format with the 7-tile pitch layout;
- correct location of the floor vents to deliver the chilled air directly to the rack;
- correct construction and loading of the rack to maintain the desired airflow;
- correct location of the external air handling/chiller units, which need a strong mounting plinth, a secure area, and electrical and plumbing connections.

Until the early part of this century the average heat load developed in a rack was only around 1 kW, and cooling did not need to be a closely controlled activity, as simple whole-room cooling would suffice. But now, with 1U servers and blade servers, the potential heat generation is enormous. The average server has a running load of about 400 watts, meaning that a 2 kW cooling capacity equates to only five servers per rack. Putting 42 of these servers in a rack, just because they fit, would develop over 16 kW of heat, and blade servers would generate over 20 kW. Underfloor plenum cooling can supply about 6 kW of cooling capacity by the use of one or more of the following upgrade methods:

- use a larger floor tile vent, up to 75% open area;
- use a fan-assisted floor grate;
- use specialised blowers in the rack to bring more airflow into the rack and distribute it across the front face of the equipment;
- use rear doors on the racks that are full-length blower units.

Beyond about 6 kW, underfloor plenum cooling of racks becomes impractical and the next stage is water-cooling of the entire rack. Water is much more effective at removing heat than air: a water-cooled rack can dissipate in excess of 20 kW of heat.
These racks need to be plumbed into a chilled water generation and distribution system that would need to be placed outside the equipment room. Liquid carbon dioxide cooling plants are also available now; CO2 is even more efficient than water and can remove in excess of 30 kW of heat from a rack. Directly cooled racks are thus much more efficient in terms of floor space used, but they are more expensive to buy, need plumbing in, and an external chiller plant still needs to be built. For anything more than a medium-sized rectangular computer room, it is advisable to use a computational fluid dynamics (CFD) software program to model the airflow and cooling capacity of an HVAC design.

[Figure: Airflow in a standard hot-aisle/cold-aisle model]

The diagram above shows the CRAC unit as the source of the chilled air, pumping it into the underfloor plenum space. Air escapes into the cold aisle through the floor vents, passes through the racks, cooling them on the way, and appears in the hot aisle, where it rises. It then returns to the CRAC unit to repeat the process. The CRAC units are located at the end of the hot aisles to give the shortest return path back to the CRAC. Once the room goes over a certain size it is advisable to improve the return path by adding a ceiling plenum, with fans, to scavenge the hot air and direct it back to the CRAC units. It has been suggested that this becomes beneficial once the floor area extends beyond 400 m², although a dedicated return plenum would benefit a computer room of any size. Another item to take into account is locating the floor vents at the correct distance from the CRAC unit. Too close, and the air velocity will cause a negative pressure at the vent relative to the air in the room above, sucking in hot air instead of blowing cold air out; the minimum distance is about two metres before effective cooling takes place. The maximum distance from the CRAC unit again depends upon factors such as the air volume from the CRAC unit, floor depth, obstructions, and the number and size of floor vents, but a figure of ten metres seems to be commonly accepted. Some items, particularly communications equipment, are not designed for front-to-rear cooling but for side-to-side cooling, or even both at the same time! Side-to-side items may be cooled by:

- placing them in a low-density environment on a two-post frame with chilled air generally supplied from a floor vent;
- placing them in a standard server rack with a front-to-side cooling converter fan fitted;
- chilled water cooling matrices placed at the sides of the open frames that allow chilled air to be directed in a side-to-side direction.

APC, a major supplier of IT air conditioning, offers the following estimating tool to help calculate the cooling capacity required for a computer room. Note that the usual running load should be used for the IT equipment, not the nameplate rating, which is typically one third higher than the normal running load. The battery/UPS calculation is only required if the battery/UPS system is in the same computer room; TIA 942 recommends that UPS systems greater than 100 kVA be placed in another room. Allowance should also be made for future expansion and redundancy in air conditioning calculations.

IT equipment (data required: total IT load power in watts): heat output = total IT running load, not nameplate values

UPS with battery (data required: power system rating in watts): heat output = (0.04 x power system rating) + (0.06 x total IT load power)

Power distribution (data required: power system rating in watts): heat output = (0.02 x power system rating) + (0.02 x total IT load power)

Lighting (data required: floor area in square metres): heat output = 21.5 W per square metre

People (data required: maximum number of people): heat output = 100 W per person

Total: the sum of the above subtotals, in watts

Fresh air

Even with air conditioning, the computer room needs to be ventilated; air should be changed at least ten times per hour. British Building Regulations also require an air supply of ten litres per second per person, doubling if printers or photocopiers are in use. Incoming air must be filtered, with airborne particulate levels maintained within the limits of Federal Standard 209E, Airborne Particulate Cleanliness Classes in Cleanrooms and Clean Zones, Class 100,000. Air from sources outside the building should be filtered using High Efficiency Particulate Air (HEPA) filtration rated at 99.97% efficiency (DOP Efficiency MIL-STD-282) or greater.
As the external temperature at British latitudes is below 22 °C for about 70% of the year, some of the huge electricity bills associated with cooling data centres can be mitigated by taking in even larger volumes of outside air during the autumn, winter and spring months, with the minimum ventilation rate maintained for the summer months.
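The estimating table above reduces to a simple sum. A minimal sketch in Python (the function name and example figures are illustrative, not from the source):

```python
def room_heat_output_w(it_load_w, power_system_rating_w, floor_area_m2, people):
    """Estimate computer-room heat output in watts per the APC-style
    table above: IT load + UPS losses + distribution losses
    + lighting + occupants."""
    ups = 0.04 * power_system_rating_w + 0.06 * it_load_w
    pdu = 0.02 * power_system_rating_w + 0.02 * it_load_w
    lighting = 21.5 * floor_area_m2          # 21.5 W per square metre
    occupants = 100 * people                 # 100 W per person
    return it_load_w + ups + pdu + lighting + occupants

# e.g. 10 kW IT load, 15 kW rated power system, 50 m2, 2 people
# room_heat_output_w(10000, 15000, 50, 2) -> 12975.0 W
```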

10.0 Electrical systems

10.1 Mains input power requirement

The data centre must be supplied with sufficient electrical power to cope with day-one demands and foreseeable expansion plans. Suitable back-up equipment, e.g. Uninterruptible Power Supplies (UPS) and standby generators, must also be considered. The design goals are to specify:

The main electricity feed from the utility company

The distribution system around the data centre

Power distribution within the equipment racks

The UPS/standby power system

The power distribution system also needs to be planned in accordance with the Tier 1 to 4 requirements of TIA 942. The first step is to understand the quantity of power required, at day one and when expansion plans are taken into account. Some general rules:

Nameplate values can be derated by 33% for normal running power

UPS efficiency is typically 88%, i.e. 12% of the input power is consumed

Recharging UPS batteries needs 20% of rated power

Lighting: allow 21.5 W per square metre

Air conditioning can take 100% of its rated cooling capacity

The normal IT running load is therefore the sum of all the nameplate ratings of all the equipment, multiplied by about 0.67. To size the power supply requirements, however, a number of conservative assumptions are made, such as allowing for the inrush current when equipment starts and general overrating factors as a margin of safety.

1. Add up all the nameplate ratings of all the equipment and multiply by 0.67; this is the day-one running load
2. Multiply this by whatever expansion factor is expected to apply to the data centre
3. Add 50% to the above to allow for inrush current
4. Add 32% to allow for the UPS inefficiency and battery charging requirement
5. Add 21.5 W per square metre of floor space to allow for lighting
6. Double the amount reached so far to allow for air conditioning power requirements
7. Multiply the total so far by 1.25 to provide a further overrating factor, so that cables aren't expected to work at their full safe load
8. Add a figure, say 5%, for power factor correction*. Modern IT equipment is usually power factor corrected, but there will be some power factor loss

The figure thus arrived at is the amount of power that needs to be available in the data centre, even though the full amount is unlikely to be needed under normal conditions. This figure also leads to the correct choice of standby generator. Let's take the example of a 200 square metre computer room with a day-one nameplate load of 100 kW and a required expansion capacity of 100%.

Day-one running load = 100 x 0.67 = 67 kW
Long-term load, after expansion = 67 x 2 = 134 kW
Add 50% for peak load factor = 134 x 1.5 = 201 kW
Add 32% for UPS inefficiency and battery charging = 201 x 1.32 = 265 kW
Add 21.5 W/m² for lighting = 265 + (200 x 0.0215) = 269 kW
Double this amount for power to run the air conditioning = 269 x 2 = 538 kW
Multiply by 1.25 for the overrating factor = 538 x 1.25 = 672 kW
Add 5% for power factor correction = 672 x 1.05 = 706 kW

So the power supply to be designed in is more than ten times the day-one running load.

*Power factor. Remember that current times voltage equals volt-amperes, usually expressed as kVA. Useful work, or power, is measured in watts, and volts x amps x power factor = watts.
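The eight steps above can be sketched as a short calculation (the function name is illustrative). Run without the intermediate rounding of the worked example, it gives about 708 kW rather than 706 kW:

```python
def required_supply_kw(nameplate_kw, expansion_factor, floor_area_m2):
    """Data-centre supply sizing following the eight steps above."""
    load = nameplate_kw * 0.67            # 1. day-one running load
    load *= expansion_factor              # 2. expansion
    load *= 1.5                           # 3. inrush current allowance
    load *= 1.32                          # 4. UPS inefficiency + battery charging
    load += 0.0215 * floor_area_m2        # 5. lighting at 21.5 W/m2
    load *= 2                             # 6. air conditioning
    load *= 1.25                          # 7. cable overrating margin
    load *= 1.05                          # 8. power factor correction
    return load

# e.g. required_supply_kw(100, 2, 200) -> about 708 kW
```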
The power factor is the cosine of the phase difference between the voltage and the current in an alternating current circuit. This phase separation is caused by a reactive, i.e. capacitive or inductive, load. UPS systems are always rated in kVA output, as they cannot know the power factor of the load they will be connected to, and hence the real power, in watts, that they can deliver.
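The relationship between apparent power (kVA) and real power (watts) described above can be expressed directly (function names are illustrative):

```python
import math

def power_factor(phase_angle_deg: float) -> float:
    """Power factor is the cosine of the voltage/current phase difference."""
    return math.cos(math.radians(phase_angle_deg))

def real_power_w(volt_amperes: float, pf: float) -> float:
    """Useful work: volts x amps x power factor = watts."""
    return volt_amperes * pf

# e.g. a 100 kVA UPS feeding a load with power factor 0.9
# can deliver at most 90 kW of real power
```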

10.2 UPS and backup requirements

Having understood the sizing implications, the next step is to consider the methods of back-up and redundancy and how these fit in with the tiering philosophy of TIA 942.

TIA 942 summary

Tier 1: one delivery path; single utility feed; single-cord equipment with 100% capacity; 8 hours of generator fuel, but no generator required if the UPS backup time is more than 8 minutes; redundancy N

Tier 2: one delivery path; single utility feed; dual-cord equipment with 100% capacity on each cord; 24 hours of generator fuel; redundancy N+1

Tier 3: one active and one passive delivery path; dual utility feed; dual-cord equipment with 100% capacity on each cord; 72 hours of generator fuel; redundancy N+1

Tier 4: two active delivery paths; dual utility feed from different substations; dual-cord equipment with 100% capacity on each cord; 96 hours of generator fuel; redundancy 2N

Remember that:

N means only enough items to do the task at hand; any one point of failure will stop the system

N+1 means one more item than is necessary, thus allowing for one point of failure

2N means two complete, independent paths

Going to 2N, or better still 2N+1, will give the resilience a data centre needs, but obviously at major cost: not surprisingly, 2N costs at least twice as much as the provision of the minimum required service.

An uninterruptible power supply (UPS) system needs to be defined to back up the power supply. This is usually based on batteries and a double-conversion on-line UPS. In this method the incoming AC is rectified and permanently charges a battery pack, which is also connected in parallel to an inverter that makes the mains-voltage AC available again. This is a very reliable method and also isolates the IT load from sags, surges, spikes and most harmonics coming in from the mains supply. The downside of this method is that it is inefficient, with up to 12% of the input power wasted in the rectification-inversion cycle.
Other kinds of UPS are available. One is based on the kinetic energy of a large rotating mass connected to a device that acts as a motor while input power is available and as a generator when the AC input fails; the kinetic energy stored in the rotating flywheel then produces electricity for a short time. Kinetic energy devices are smaller and cheaper and need less maintenance, but usually have back-up times measured in tens of seconds rather than the minutes offered by a battery system.
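The N, N+1 and 2N schemes described above translate directly into module counts once a UPS module size is chosen. A hedged sketch (the function name, module size and example load are illustrative):

```python
import math

def installed_modules(load_kw: float, module_kw: float, scheme: str) -> int:
    """Number of UPS modules to install for a given redundancy scheme.
    N is the minimum number of modules that can carry the load."""
    n = math.ceil(load_kw / module_kw)   # N: just enough modules
    if scheme == "N":
        return n
    if scheme == "N+1":
        return n + 1                     # one spare module
    if scheme == "2N":
        return 2 * n                     # two complete independent paths
    if scheme == "2(N+1)":
        return 2 * (n + 1)
    raise ValueError(f"unknown scheme: {scheme}")

# e.g. a 250 kW load on 100 kW modules: N=3, N+1=4, 2N=6, 2(N+1)=8
```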

UPS design options and requirements:

1. Size the electrical power required, in kVA
2. Decide what critical load needs to be backed up by the UPS. Some people include the air conditioning, and some don't, expecting the back-up generator to be online before the equipment overheats. Backing up the air conditioning with the UPS will double the size of the UPS
3. Decide upon the length of time the battery pack needs to back up the system. Battery packs are expensive, heavy and take up a lot of space. Recommendations are:
a. TIA 942: 5-30 minutes
b. SUN: 15 minutes
c. Note that TIA 942 also specifies that a Tier 1 system does not need a generator if the battery system can provide backup for at least 8 minutes
4. Decide upon the level of redundancy desired/affordable, e.g. N, N+1, 2N or 2(N+1)
5. Decide upon the location of the UPS and battery equipment. It should be close to the IT equipment and main power feed to reduce cable losses. TIA 942 recommends that UPS systems larger than 100 kVA be located in their own separate room
6. Decide upon the size and location of the standby generator. It must be in a secure position, in an area where noise and fumes will not be disruptive. It should also be close to the UPS system and switchgear to minimise cable losses

10.3 Electrical distribution around the computer room

The electrical cabling, of adequate size to meet the current and future design, must feed each equipment rack location and planned location. For Tier 2 and above there must be duplicate, redundant feeds to each location. Cabling may be fed into the top or bottom of racks, or both. Cabling run in the underfloor plenum space should be laid in the cold aisle at low level. Cabling entering through the bottom of the rack should be sealed with a brush strip to prevent the entry of chilled air in an uncontrolled manner.
Cables should be terminated and presented on IEC connectors of appropriate size for the current, suitable for single- or three-phase connection as appropriate. Usual ratings are 16 or 32 amps. The higher power ratings of today's servers suggest that two 32 amp feeds would be required, each giving around 7 kW. Higher power ratings would require a three-phase connection, providing around 22 kW.

10.4 Electrical distribution within the rack

At its simplest, the IEC connector is connected to a power strip which distributes the electricity to a number of standard sockets, which in the UK would be a standard 13 amp BS 1363 socket or an IEC socket. In America the plugs and sockets are defined in the NEMA series.
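The feed capacities quoted above (around 7 kW for a single-phase 32 amp feed, around 22 kW for three-phase) follow from the UK supply voltages. A sketch assuming 230 V single-phase and 400 V line-to-line (the function name and defaults are illustrative):

```python
import math

def feed_capacity_kw(amps: float, phases: int = 1,
                     volts: float = 230.0, line_volts: float = 400.0) -> float:
    """Approximate power available from a rack feed.
    Single phase: V x I.  Three phase: sqrt(3) x V_line x I."""
    if phases == 1:
        return amps * volts / 1000.0
    return math.sqrt(3) * line_volts * amps / 1000.0

# feed_capacity_kw(32)           -> about 7.4 kW
# feed_capacity_kw(32, phases=3) -> about 22.2 kW
```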

The power distribution units can extend beyond simple distribution of the power and may offer:

Sequential start-up to lower inrush current

Simple filtering

Monitoring of current with an LED readout

Automatic switching between feeds

Network reporting and remote control through a TCP/IP connection

Other systems take in mains voltage and distribute 48 V DC around the rack, removing the need for each item of equipment to have its own dedicated power supply.

11.0 Earthing, bonding and the Signal Reference Grid

Earthing is required for three reasons:

Safety from electrical hazards

A reliable signal reference within the entire information technology installation

Satisfactory electromagnetic performance of the entire information technology installation

Correct earthing is required by law and described in various standards, such as:

BS 6701 Telecommunication cabling and equipment installations

BS 7671 Requirements for electrical installations: IEE Wiring Regulations, 16th Edition

Across Europe there is also:

EN 50310 Application of equipotential bonding and earthing in buildings with information technology equipment

EN 50174-2 Information technology - Cabling installation - Part 2: Installation and planning practices inside buildings

Across the world we have:

IEC 60364 Electrical installations of buildings, various sections, including Part 5-548: Earthing arrangements and equipotential bonding for information technology equipment

ISO 11801:2002 Information technology - Cabling for customer premises

ANSI/TIA/EIA-J-STD-607 Commercial building grounding and bonding requirements for telecommunications

And from the world of telecommunications there is:

ETS 300 253 Equipment engineering - Earthing and bonding of telecommunications equipment in telecommunication centres

ITU-T K.27 Bonding configurations and earthing inside a telecommunications building

ITU-T K.31 Bonding configurations and earthing of telecommunications installations inside a subscriber's building

And a particular standard referenced by TIA 942 is IEEE Std 1100, Powering and Grounding Sensitive Electronic Equipment.

It is essential that all metallic elements are correctly earthed according to the most relevant of the standards above. This includes all equipment racks, cable containment and the metallic sheaths and armour of communications cables. Note that whereas earthing means "the connection of the exposed conductive parts of an installation to the main earthing terminal of that installation" (BS 7671), bonding means "the electrical connection putting various exposed conductive parts and extraneous conductive parts at a substantially equal potential" (EN 50310). Thus the connection for bonding must offer a low enough impedance that a potential difference of not more than 1 volt rms can be maintained across the frequency range of interest.

This leads on to the requirement for the Signal Reference Grid (SRG), or System Reference Potential Plane (SRPP) as it is referred to in CENELEC standards. The SRG is there to offer a suitably low-impedance path to ground for high-frequency interference signals, which cannot be achieved by simple earthing. No standard mandates an SRG, but everybody seems to recommend one, e.g.:

TIA 942: "Consideration should be given to installing a common bonding network (CBN) such as a signal reference structure as described in IEEE Standard 1100 for the bonding of telecommunications and computer equipment"

HP/Dell site preparation guide: "If the system is on raised flooring, use a 2-foot x 2-foot (61-cm x 61-cm) grounding grid"

EN 50310: "A system reference potential plane (SRPP) conductive solid plane, as an ideal goal in potential equalising, is approached in practice by horizontal or vertical meshes. The mesh width thereof is adapted to the frequency range to be considered. Horizontal and vertical meshes may be interconnected to form a grid structure approximating to a Faraday cage"

SUN: "A signal reference grid should be designed for the computer room. This provides an equal potential plane of reference over a broad band of frequencies through the use of a network of low-impedance conductors installed throughout the facility"

The SRG should therefore be constructed on the floor below the IT equipment, from copper tapes approximately 50 mm wide. The grid dimensions have typically been 24 x 24 inches (610 x 610 mm), but this only gives effective protection up to around 30 MHz. With gigabit Ethernet operating at up to 100 MHz, the spacing needs to be reduced to 200 mm to be effective, whereas ten gigabit Ethernet, operating at 500 MHz, would ideally need an almost complete surface; when using 50 mm copper tape, a grid spacing of about 100 mm is the practical limit. The SRG must be effectively bonded to the building steel and the main electrical and telecommunications grounding busbars, and all items on top of or crossing the SRG must be connected to it.
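The grid figures quoted above (610 mm effective to around 30 MHz, 200 mm for 100 MHz) are consistent with a mesh spacing of roughly one fifteenth of a wavelength. A hedged sketch under that assumption; the divisor is an inference from the quoted figures, not a standard value:

```python
# Mesh spacing estimate for a Signal Reference Grid. The lambda/15
# divisor is inferred from the figures in the text (610 mm ~ 30 MHz,
# 200 mm ~ 100 MHz); it is an illustrative rule of thumb, not a
# value taken from any standard.

C = 299_792_458.0  # speed of light in free space, m/s

def max_mesh_spacing_mm(freq_hz: float, divisor: float = 15.0) -> float:
    """Largest SRG mesh spacing effective at a given frequency."""
    wavelength_m = C / freq_hz
    return wavelength_m / divisor * 1000.0

# At 500 MHz this gives about 40 mm, consistent with the text's
# observation that 10 Gb Ethernet ideally needs an almost complete surface.
```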

12.0 Fire detection, alarm and suppression

A fire design policy operates over a number of related areas:

Design the building with materials and layouts that minimise fire risk

Operate the building with practices that reduce fire risk

Detect fire and smoke with suitable apparatus

Sound an alarm if fire is detected, to evacuate the building, summon the fire brigade and set off fire extinguishants

Suppress the fire with automatic fire extinguishants

The principal fire safety legislation in the UK is the Fire Precautions (Workplace) Regulations 1997/1999. This is obviously a major subject and one governed by laws and building regulations. TIA 942 Telecommunications Infrastructure Standard for Data Centers, April 2005, requires the following for a data centre.

12.1 Detection

The recommended smoke detection system for critical data centres where high airflow is present is one that provides early warning via continuous air sampling and particle counting, with a range up to that of conventional smoke detectors. The system has four levels of alarm, ranging from detecting smoke in the invisible range up to that detected by conventional detectors; at its highest alarm level it would be the means of activating the pre-action system valve. One system would be at the ceiling level of the computer room, entrance facilities, electrical rooms and mechanical rooms, as well as at the intake to the computer room air-handling units. A second system would cover the area under the access floor in the computer room, entrance facilities, electrical rooms and mechanical rooms. A third system is also recommended for the operations centre and printer room, to provide a consistent level of detection for these areas.

A fire alarm system consists of:

1. Detectors (smoke, heat, flame etc.)
2. Manual call points
3. Alarms (bells, sirens, voice recordings, visual indicators etc.)
4. Approved fire-survival cable to link it all together
5. A central control box to link it all together and to connect to other services

In the UK fire detection is governed by:

BS 5839-1:2002 Fire detection and fire alarm systems for buildings - Code of practice for system design, installation, commissioning and maintenance

And specifically for computer rooms and other electronic installations:

BS 6266:2002 Code of practice for fire protection for electronic equipment installations

Fire alarm and detection components are generally covered by:

BS EN 54 Fire detection and fire alarm systems

Typical fire detection and alarm loop

The cables must be fire-survivable as described in BS 5839-1:2002, Clause 26.2 (d & e), which invokes, amongst others:

BS …:2002 Mineral insulated cables and their terminations with a rated voltage not exceeding 750 V

BS 6387:1994 Performance requirements for cables required to maintain circuit integrity under fire conditions

Fire detectors come in a number of guises, such as ionising smoke detectors, optical detectors, and flame and heat detectors, but the smoke detection system recommended for computer rooms is a highly sensitive system that gives very early warning, known as Aspirating Smoke Detection (ASD). BS 6266 recommends "a dedicated smoke detection system interfaced with the main building system, and an aspirating smoke detection to monitor return air flows" for critical equipment areas such as centralised computer facilities. BS 5839 describes many different types of smoke and flame detectors and, most importantly, where they should be sited. The siting of aspirating smoke detector inlets follows exactly the same rules as for more conventional smoke detectors.

ASD is a high-sensitivity, aspirating, laser-based optical smoke detection system that continually draws air from the protected area through a network of pipes and passes it through a calibrated detection chamber. It is capable of providing very early warning of fire conditions, giving invaluable time to investigate and respond to a potential threat of fire. ASD is very often referred to by a brand name, VESDA (Very Early Smoke Detection Apparatus), a trademark of Vision Products Pty Ltd of Australia. A VESDA system can detect a fire within 70 seconds and activate a fire suppression response in under two minutes; a sprinkler system would take four to six minutes under the same circumstances.

Conclusion: various standards, such as TIA 942 and BS 6266, recommend aspirating smoke detectors for data processing applications such as data centres because of their quick reaction time. The detection system should be able to give various levels of alarm and needs to be optimised for the different areas encountered within a data centre. A data centre should have two levels of fire detection and suppression: an aspirating smoke detector linked to a gaseous fire suppression system as the first response, and a pre-action sprinkler system as the last resort.

12.2 Fire suppression

According to the SUN data centre guide, the ideal system would incorporate both a gas system and a pre-action water sprinkler system in the ambient space. The Fire Safety Advice Centre tabulates the following methods, with their suitability for telecom rooms, computer rooms and control rooms:

Automatic sprinklers

Detection and pre-action sprinkler

Detection and water sprays (mist)

Detection and total flood CO2 (under floor)

Foam

High sensitivity smoke detection aspirating systems

Detection and dry powder

Detection and manual intervention

Detection and inert gas

Detection and fine particulate aerosol

Detection and halocarbon gas

12.3 Gas suppression

EC Regulation 2037/2000 prohibits the sale and use of halons, including material that has been recovered or recycled, from 31st December 2002. Furthermore, with the exception of equipment deemed critical under the Regulation, all fire-fighting equipment in the EU containing halons must be decommissioned before 31st December 2003. The halon replacement market for clean-agent gaseous suppression systems splits into inert gases and halocarbon gases.

Inert gases

Inert gas agents are electrically non-conductive clean fire suppressants used in design concentrations of 35-50% by volume to reduce the ambient oxygen concentration to between 14% and 10%. Oxygen concentrations below 14% will not support the combustion of most fuels (and human exposure must be limited).

NN100 (IG-100): nitrogen
Argotec (IG-01): argon
Argonite (IG-55): nitrogen/argon mixture
Inergen (IG-541): nitrogen/argon/carbon dioxide mixture

Halocarbon gases

A number of fire-extinguishing halocarbon gases with zero ozone depletion potential (ODP) have been developed. These include both HFCs (hydrofluorocarbons) and PFCs (perfluorocarbons). The DETR has published a document to give guidance on halon replacements, Advice on Alternatives and Guidelines for Users of Fire Fighting and Explosion Protection Systems, although products are not officially approved or recognised by this route.

FE-13 (HFC-23): CHF3, trifluoromethane
FE-125 (HFC-125): CF3CHF2, pentafluoroethane
FM-200 (HFC-227ea): CF3CHFCF3, heptafluoropropane
FE-36 (HFC-236fa): CF3CH2CF3, hexafluoropropane
CEA-308 (PFC): C3F8, perfluoropropane
CEA-410 (PFC): C4F10, perfluorobutane

In general, inert gas systems take up more space and are slightly more expensive than the halocarbon alternatives.

Manual means of discharging the fire suppression system should also be installed, taking the form of manual pull stations at strategic points in the room. In areas where gas suppression systems are used, there is normally also a means of manually aborting the suppression discharge. See also:

BS 6266:2002 Code of practice for fire protection for electronic equipment installations

BS ISO 14520-1:2000 Gaseous fire-extinguishing systems - Physical properties and system design - General requirements

12.4 Pre-action sprinkler systems

Gaseous fire suppression is seen as the first line of defence; after that comes the sprinkler system. This must be of the pre-action type, meaning that the pipes are normally dry and therefore cannot drip onto the equipment. The smoke detection system can set off the first phase of the sprinkler system by letting water into the piping, but the additional heat of the fire is still needed to set off the sprinkler heads themselves. This is sometimes known as a double-knock system.

12.5 Portable fire extinguishers

Portable fire extinguishers should also be placed strategically throughout the room. They should be unobstructed and clearly marked, with labels visible above the tall computer equipment from across the room. Appropriate tile lifters should be located at each extinguisher station to allow access to the subfloor void for inspection, or to address a fire. A torch should also be located with each tile lifter.

Conclusion

The fire safety plan is a multilayered approach that requires a coordinated plan for:

Designing for low flammability and fire risk

Operating with low risk

Emergency exits

Emergency lighting

Emergency exit signage

Fire detection, appropriate to the area covered

Fire alarm

Multi-level automatic fire suppression

Manual fire alarms and portable fire extinguishers

Staff training and fire drills

A maintenance plan for all equipment involved

13.0 Communications cabling and containment

Cabling is required to connect all the communications and control devices within the data centre and to the world beyond. Correct choice and installation of the cabling is essential to guarantee error-free transmission of data. The communications protocols within the data centre nowadays revolve mostly around Ethernet and Fibre Channel. Communications speeds of at least 1 Gb/s should be designed for, and ten gigabit speeds now need to be considered. Design issues revolve around the selection of copper and/or optical fibre, the grades of copper and fibre to be used, screened or unscreened copper cabling, and the levels of redundancy and resilience to be built into the cabling model.

13.1 Spaces and hierarchy

The TIA 942 model shows the spaces that need to be accommodated and the cabling interconnection hierarchy between and within them. EN 50173-5 (draft) is very similar but uses slightly different terminology:

Cross-connect in the entrance room (TIA-942) = ENI, external network interface (EN 50173-5)

Main cross-connect in the MDA, main distribution area (TIA-942) = MD, main distributor (EN 50173-5)

Horizontal cross-connect in the MDA or HDA, horizontal distribution area (TIA-942) = ZD, zone distributor (EN 50173-5)

Zone outlet or consolidation point in the ZDA, zone distribution area (TIA-942) = LDP, local distribution point (EN 50173-5)

Outlet in the EDA, equipment distribution area (TIA-942) = EO, equipment outlet (EN 50173-5)

Horizontal cabling (TIA-942) = zone distribution cabling (EN 50173-5)

Backbone cabling between the MDA and HDAs (TIA-942) = main distribution cabling (EN 50173-5)

Backbone cabling from the MDA to the entrance room or telecommunications room (TIA-942) = network access cabling (EN 50173-5)

Telecommunications room (TIA-942) = distributor (EN 50173-5)

Alignment of terminology

13.2 Cable selection

TIA 942 recognises:

100-ohm twisted-pair cable (ANSI/TIA/EIA-568-B.2), with Category 6 UTP or ScTP recommended (ANSI/TIA/EIA-568-B.2-1)

Multimode optical fibre cable, either 62.5/125 micron or 50/125 micron (ANSI/TIA/EIA-568-B.3), with 50/125 micron 850 nm laser-optimised multimode fibre recommended (ANSI/TIA-568-B.3-1)

Single-mode optical fibre cable (ANSI/TIA/EIA-568-B.3)

Coaxial media: 75-ohm (734 and 735 type, Telcordia Technologies GR-139-CORE), with coaxial connectors to ANSI T1.404

EN 50173-5 recognises any of the cabling media addressed in EN 50173, e.g. Cat 5, Cat 6, Cat 7 etc., but Class E/Cat 6 is recommended for the main distribution and zone distribution cabling. It would seem that, within the data centre/computer room, cable of less than Category 6 performance should not be used. Note that the American standards do not recognise Category 7/Class F. None of the standards discuss 10GBASE-T or the forthcoming Augmented Category 6 standard, which had not been published, or even finalised, at the time of writing, but is expected later in the year. Products claiming Cat6A performance are already on sale, but whether unscreened (UTP) products can meet the alien crosstalk requirements and EMC regulations when operating at the 500 MHz frequencies invoked by 10GBASE-T is still a matter of debate within the industry. Certainly a screened Cat 6 or Cat6A system is going to cope much better with the EMC and alien crosstalk issues.

Cable selection issues

Copper cable:

o At least Category 6. Consider Cat6A or Cat 7 for higher bandwidth performance
o Consider unscreened or screened. Unscreened is cheapest and seems to cope with gigabit Ethernet speeds; consider screened for severe EMC problems or for upgrade to 10GBASE-T operation
o Consider the fire performance of the cable. Unlike the USA, there are no rules requiring very low flammability cabling in Europe. As a minimum, request zero-halogen/low-flammability cable to IEC 60332-3C
o The best-performing cable in a fire situation is the plenum style meeting NFPA 262, Standard Method of Test for Flame Travel and Smoke of Wires and Cables for Use in Air-Handling Spaces: 2002, or its higher-performing companion known as Limited Combustible Plenum cable

Optical fibre:

o ISO and EN standards now classify optical fibres as OM1, OM2, OM3 and OS1. OM means multimode fibre and OS means singlemode fibre
o OM3 is a very high bandwidth fibre optimised for ten gigabit operation and is the obvious choice for new data centre installations
o Singlemode fibre, OS1, is not needed within the data centre, but it may be needed to connect to the outside world of telecommunications and should be put in place to allow for direct high-speed communications from routers and SAN devices
o Optical connectors must also be specified. There are many standards-recognised types to choose from; the market leader for high-speed data communications is now the LC connector

13.3 Preconnectorised cabling

Cabling is traditionally installed as cable, which is then terminated on site in patch panels, outlets and other connectors. There is a big time advantage to be gained by terminating the cables off site and installing the ready-made assemblies into the data centre. Preconnectorised cabling is most popular when time on site is at an absolute premium. This may be a new build, such as a data centre, where timescales are critical and many different trades are vying for the right to work on any particular bit of floor space at any time. Other time-critical areas are live sites that need additional cabling but where the costs and implications of downtime are horrendous, such as a trading floor or call centre; such a facility may want all its cabling upgraded or extended in one overnight operation. Busy city centre facilities also suffer from a lack of parking and loading bays, on-site storage restrictions, and the security worries associated with cable installers needing weeks of access to the site. Preconnectorised cabling should reduce the time needed on site by around 75% compared to traditional installation.

The quality of terminations should also be improved, by allowing sophisticated Category 6 copper and optical fibre terminations to be made in a clean factory environment by skilled people. Each cable assembly can be 100% checked in the factory, so whatever is sent to site is known to be of the highest quality. There are no particular disadvantages to preconnectorised cabling, and it should be cost-neutral to the end user; however, accurate surveys need to be carried out to ensure correct cable lengths are made up and installed.

Connectix Express preconnectorised copper cabling (diagram: panel-to-panel, panel-to-desk and panel-to-floor links between patch panels, desk pods and floor boxes)

13.4 Cable containment

The cable containment must protect the cables and maintain their bend radius requirements. Containment may take the form of basket, tray, conduit, trunking etc. If it is metallic, then all of the containment must be correctly earthed. All cabling, patch panels, earthing and containment systems must be adequately labelled and marked, and records kept. This aspect of cabling is described in the following:

ANSI/TIA/EIA-606-A Administration Standard for the Telecommunications Infrastructure of Commercial Buildings

EN 50174-1 Information technology - Cabling installation - Part 1: Specification and quality assurance

ISO/IEC 14763-1 Information technology - Implementation and operation of customer premises cabling - Part 1: Administration

TIA 942 Telecommunications Infrastructure Standard for Data Centers, and

BS 6701:2004 Telecommunications equipment and telecommunications cabling - Specification for installation, operation and maintenance.

All of these standards require that all cables and components be suitably marked to uniquely identify them. The durability of all labelling must also be suitable for the rigours of the environment in which it is placed and the expected timescale of the installation, usually in excess of ten years.

The cables need to be contained, protected and separated from other services. For example, EN 50174-2 requires a separation of at least 200 mm between unscreened data and unscreened power cables, although distances can come down if any of the cables are screened. BS 6701 requires a 50 mm separation at all times between cables unless there is a non-metallic divider separating the two groups. In the UK, the BS 6701 and EN 50174-2 requirements need to be overlaid and the worst-case separation distances used for a correct installation.

BS 6701 and EN 50174-2 overlaid - separation distances:

Type of installation                              Without a divider  Non-metallic divider  Aluminium divider  Steel divider
Unscreened power cable and unscreened IT cable    200 mm             200 mm                100 mm             50 mm
Unscreened power cable and screened IT cable      50 mm              50 mm                 50 mm              50 mm
Screened power cable and unscreened IT cable      50 mm              30 mm                 50 mm              50 mm
Screened power cable and screened IT cable        50 mm              0 mm                  50 mm              50 mm

13.5 Cabling standards summary

At present EN 50173-1 defines the cabling design. Soon the more specific EN 50173-5 standard will more precisely define data centre cabling requirements. On a wider basis, ISO/IEC 11801 and ANSI/TIA/EIA-568-B also define cable system design. TIA 942 defines the cabling hierarchy for data centres and states the permissible range of cables; TIA 942 invokes only other American standards such as ANSI/TIA/EIA-568-B. EN 50174 parts 1, 2 and 3 describe installation and quality assurance techniques. EN 50310 describes the equipotential bonding system for information technology installations.
EN 50346 describes the testing methodology to prove compliance of the installed cabling.
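As an informal sketch (not part of any standard), the overlaid worst-case separation table above can be captured as a small lookup; the function name and calling convention here are our own illustration:

```python
# Worst-case power/IT cable separation, per the BS 6701 / EN 50174-2
# overlay table in this document. Values transcribed from the table;
# the lookup helper itself is purely illustrative.

SEPARATION_MM = {
    # (power cable screened, IT cable screened):
    #   [no divider, non-metallic divider, aluminium divider, steel divider]
    (False, False): [200, 200, 100, 50],
    (False, True):  [50, 50, 50, 50],
    (True, False):  [50, 30, 50, 50],
    (True, True):   [50, 0, 50, 50],
}

DIVIDER_COLUMN = {"none": 0, "non-metallic": 1, "aluminium": 2, "steel": 3}

def required_separation_mm(power_screened: bool, it_screened: bool,
                           divider: str = "none") -> int:
    """Return the worst-case separation in mm between a power and an IT cable."""
    return SEPARATION_MM[(power_screened, it_screened)][DIVIDER_COLUMN[divider]]

print(required_separation_mm(False, False))                 # 200
print(required_separation_mm(True, True, "non-metallic"))   # 0
```

A survey tool could apply this check to every shared containment run before sign-off.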

13.6 Modular designs

Data centre users rarely know exactly what the format of the IT equipment will be when the data centre goes live, and certainly don't know what will be expected of it next year. For this reason many people like to design a generic centre based on flexible modular units such as the Capitoline Cluster Concept. In one such example, a cluster consists of five server racks, with half of one rack dedicated to cabling interconnection. Each rack takes 60 Cat 6 cables and one OM3 8-fibre cable back to the Main Distribution Frame (MDF). One cluster is dedicated to Wide Area Networking/router/telecoms applications. It too has 60 Cat 6 cables, but more optical fibre and also a singlemode link back to the MDF to allow for direct high-speed connection to the outside world. The Storage Area Network (SAN) cluster is identically cabled. The MDF mirrors the server, WAN and SAN zones and also has a dedicated area to connect to the Telecoms Room and ENI. For additional resilience each server cluster has Cat 6 cables wired directly to the WAN and SAN clusters. Modular designs and cluster concepts are bound to become more popular as the rate of change in data centres increases. The cluster concept incorporates the air conditioning as well, by rating each rack with a minimum 2 kW load dissipation and a planned upgrade path up to 20 kW per rack.
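The cluster arithmetic above scales linearly, which is what makes the modular approach easy to plan. A hypothetical planning helper (names and defaults are ours; it reads the 60 Cat 6 cables and one 8-fibre OM3 cable as per-rack counts, as the text states):

```python
# Tally the MDF cabling implied by the cluster concept described above:
# five racks per cluster, 60 Cat 6 cables and one OM3 8-fibre cable per
# rack back to the MDF. All parameter names are illustrative.

def mdf_cabling(clusters: int, racks_per_cluster: int = 5,
                cat6_per_rack: int = 60, fibres_per_om3: int = 8) -> dict:
    """Return the cable counts arriving at the MDF for a given cluster count."""
    racks = clusters * racks_per_cluster
    return {
        "racks": racks,
        "cat6_cables": racks * cat6_per_rack,   # copper links to the MDF
        "om3_cables": racks,                    # one OM3 cable per rack
        "fibre_cores": racks * fibres_per_om3,  # total fibre cores
    }

print(mdf_cabling(1))
# {'racks': 5, 'cat6_cables': 300, 'om3_cables': 5, 'fibre_cores': 40}
```

Additional singlemode links for the WAN cluster, and the resilience cross-links between clusters, would be added on top of these figures.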

14.0 Security, Access Control and CCTV

TIA 942 requires that the data centre be secure.

Security access control and monitoring:

Control/monitoring at:        Tier 1                 Tier 2               Tier 3                    Tier 4
Generators                    Industrial grade lock  Intrusion detection  Intrusion detection       Intrusion detection
UPS, telephone & MEP rooms    Industrial grade lock  Intrusion detection  Card access               Card access
Fibre vaults                  Industrial grade lock  Intrusion detection  Intrusion detection       Card access
Emergency exit doors          Industrial grade lock  Monitor              Delay egress              Delay egress
Accessible exterior windows   Off-site monitoring    Intrusion detection  Intrusion detection       Intrusion detection
Security operations centre    N/a                    N/a                  Card access               Card access
Doors into computer rooms     Industrial grade lock  Intrusion detection  Card or biometric access  Card or biometric access
Perimeter building doors      Off-site monitoring    Intrusion detection  Card access               Card access
Doors from lobby to floors    Industrial grade lock  Card access          Single person interlock   Single person interlock

CCTV requirements:

CCTV monitoring of:             Tier 1          Tier 2          Tier 3  Tier 4
Building perimeter and parking  No requirement  No requirement  Yes     Yes
Generators                      N/a             N/a             Yes     Yes
Access-controlled doors         No requirement  Yes             Yes     Yes
Computer room floors            No requirement  No requirement  Yes     Yes
UPS, telephone and MEP rooms    No requirement  No requirement  Yes     Yes

CCTV recording:

                                    Tier 1          Tier 2          Tier 3        Tier 4
CCTV recording on all cameras       No requirement  No requirement  Yes; digital  Yes; digital
Recording rate (frames per second)  N/a             N/a             20 f/s min    20 f/s min
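The access control matrix above reads naturally as a lookup from location and tier to the minimum control required. A small illustrative excerpt (the helper function and dictionary layout are ours, not part of TIA 942):

```python
# Minimum security control per location and tier, transcribed from a few
# rows of the TIA 942-based matrix above. Index 0 is Tier 1, index 3 is
# Tier 4. The lookup helper is purely illustrative.

SECURITY = {
    "generators": ["Industrial grade lock", "Intrusion detection",
                   "Intrusion detection", "Intrusion detection"],
    "doors into computer rooms": ["Industrial grade lock", "Intrusion detection",
                                  "Card or biometric access",
                                  "Card or biometric access"],
    "doors from lobby to floors": ["Industrial grade lock", "Card access",
                                   "Single person interlock",
                                   "Single person interlock"],
}

def required_control(location: str, tier: int) -> str:
    """Return the minimum control for a location at a given tier (1-4)."""
    return SECURITY[location.lower()][tier - 1]

print(required_control("Generators", 3))              # Intrusion detection
print(required_control("Doors into computer rooms", 4))  # Card or biometric access
```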

15.0 Building Management Systems

Building Management Systems (BMS) cover a range of technologies that control and optimise space heating, air conditioning, hot water services and lighting in buildings. TIA 942 makes the following statement:

A Building Management System (BMS) should monitor all mechanical, electrical, and other facilities equipment and systems. The system should be capable of local and remote monitoring and operation. Individual systems should remain in operation upon failure of the central Building Management System (BMS) or head end. Consideration should be given to systems capable of controlling (not just monitoring) building systems as well as historical trending. 24-hour monitoring of the Building Management System (BMS) should be provided by facilities personnel, security personnel, paging systems, or a combination of these. Emergency plans should be developed to enable quick response to alarm conditions.

We can consider a data centre as being in three layers for the BMS requirement:

Incorporation into a larger and pre-existing site BMS
A BMS dedicated to the data centre facility
Rack-level monitoring and control

With IP-based networks, more and more of these systems come together onto one common cabling system. The exception is the fire detection loop cabling, which must be dedicated and of fire-survival grade. Many of the control systems rely on automation protocols such as LONWorks and BACnet to communicate with and control the end equipment, but the higher levels of communication between controllers now rely upon TCP/IP and Ethernet.

[Figure: CCTV, access control and monitoring, fire alarms, BMS (HVAC and lighting) and environmental monitoring shown at building, room and rack level, with common IP cabling, dedicated cabling, and local and remote alarm/control.]

The environmental monitoring parameters are:

Temperature
Access
Smoke
Vibration
Water
Air flow
Humidity
Particles in the incoming air flow
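A minimal sketch of rack-level checking against the parameter list above; the threshold values and the shape of the readings are assumptions for illustration only, and a real BMS would gather readings over SNMP, BACnet or LONWorks rather than from a local dictionary:

```python
# Illustrative rack-level environmental check. THRESHOLDS holds assumed
# acceptable (low, high) bands - not values from any standard. A reading
# outside its band produces an alarm string.

THRESHOLDS = {
    "temperature_c": (18.0, 27.0),  # assumed band for illustration
    "humidity_pct": (40.0, 60.0),   # assumed band for illustration
}

def check_rack(readings: dict) -> list:
    """Return a list of alarm strings for any out-of-band readings."""
    alarms = []
    for param, (low, high) in THRESHOLDS.items():
        value = readings.get(param)
        if value is not None and not (low <= value <= high):
            alarms.append(f"{param}={value} outside {low}-{high}")
    return alarms

print(check_rack({"temperature_c": 31.0, "humidity_pct": 45.0}))
# ['temperature_c=31.0 outside 18.0-27.0']
```

The same pattern extends to air flow, water detection, vibration and particle counts; binary sensors (smoke, access, water) simply alarm on a true reading.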

16.0 Project Management and other issues

So far in this document we have considered the various tiering levels defined in TIA 942 and by the Uptime Institute. A data centre does not need to be on the same tier for every facility. It is quite acceptable for the installation to be Tier 2 for air conditioning and Tier 4 for power supply, for example. It all depends upon what the customer wants and can afford.

                               Tier 1        Tier 2        Tier 3           Tier 4
Site availability              99.671%       99.749%       99.982%          99.995%
Downtime (hours/yr)            28.8          22.0          1.6              0.4
Operations centre              Not required  Not required  Required         Required
Redundancy for power, cooling  N             N+1           N+1              2(N+1)
Gaseous fire suppression       Not required  Not required  Approved system  Approved system
Redundant backbone pathways    Not required  Not required  Required         Required

We can take further definitions from TIA 942:

N - Base requirement. The system meets base requirements and has no redundancy.

N+1 redundancy. N+1 redundancy provides one additional unit, module, path, or system in addition to the minimum required to satisfy the base requirement. The failure or maintenance of any single unit, module, or path will not disrupt operations.

2N redundancy. 2N redundancy provides two complete units, modules, paths, or systems for every one required for a base system. Failure or maintenance of one entire unit, module, path, or system will not disrupt operations.

2(N+1) redundancy. 2(N+1) redundancy provides two complete (N+1) units, modules, paths, or systems. Even in the event of failure or maintenance of one unit, module, path, or system, some redundancy will remain and operations will not be disrupted.

Tier I Data Centre: Basic

A Tier I data centre is susceptible to disruptions from both planned and unplanned activity. It has computer power distribution and cooling, but it may or may not have a raised floor, a UPS, or an engine generator. If it does have UPS or generators, they are single-module systems with many single points of failure.
The infrastructure should be completely shut down on an annual basis to perform preventive maintenance and repair work. Urgent situations may require more frequent shutdowns. Operation errors or spontaneous failures of site infrastructure components will cause a data centre disruption.
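The downtime figures in the tier table follow directly from the availability percentages: downtime per year is (1 - availability) x 8760 hours. Using the Uptime Institute availability figures commonly quoted for the four tiers:

```python
# Annual downtime implied by a site availability percentage.
# downtime_hours = (1 - availability) * 8760

HOURS_PER_YEAR = 8760

def downtime_hours(availability_pct: float) -> float:
    """Return the hours per year of downtime for a given availability %."""
    return (1 - availability_pct / 100.0) * HOURS_PER_YEAR

for tier, avail in [("I", 99.671), ("II", 99.749), ("III", 99.982), ("IV", 99.995)]:
    print(f"Tier {tier}: {downtime_hours(avail):.1f} h/yr")
# Tier I: 28.8 h/yr, Tier II: 22.0, Tier III: 1.6, Tier IV: 0.4
```

This is a useful sanity check when a client quotes an availability target: Tier IV's 99.995% still concedes around 25 minutes of downtime per year.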

Tier II Data Centre: Redundant Components

Tier II facilities, with redundant components, are slightly less susceptible to disruptions from both planned and unplanned activity than a basic data centre. They have a raised floor, UPS and engine generators, but their capacity design is Need plus One (N+1), with a single-threaded distribution path throughout. Maintenance of the critical power path and other parts of the site infrastructure will require a processing shutdown.

Tier III Data Centre: Concurrently Maintainable

Tier III capability allows any planned site infrastructure activity without disrupting the computer hardware operation in any way. Planned activities include preventive and programmable maintenance, repair and replacement of components, addition or removal of capacity components, testing of components and systems, and more. For large sites using chilled water, this means two independent sets of pipes. Sufficient capacity and distribution must be available to simultaneously carry the load on one path while performing maintenance or testing on the other path. Unplanned activities such as errors in operation or spontaneous failures of facility infrastructure components will still cause a data centre disruption. Tier III sites are often designed to be upgraded to Tier IV when the client's business case justifies the cost of the additional protection.

Tier IV Data Centre: Fault Tolerant

Tier IV provides site infrastructure capacity and capability to permit any planned activity without disruption to the critical load. Fault-tolerant functionality also provides the ability of the site infrastructure to sustain at least one worst-case unplanned failure or event with no critical load impact. This requires simultaneously active distribution paths, typically in a System + System configuration. Electrically, this means two separate UPS systems in which each system has N+1 redundancy.
Because of fire and electrical safety codes, there will still be downtime exposure due to fire alarms or people initiating an Emergency Power Off (EPO). Tier IV requires all computer hardware to have dual power inputs as defined by the Institute's Fault-Tolerant Power Compliance Specification.

Safety audit

The installation must be audited for safety at the design stage, at project handover and at routine inspections. The requirements of the fire safety programme are already outlined in section 12. Additional safety audit points are:

Raised floors (especially lifting tiles, tripping or falling)
Lifting hazards
Electrical shock hazards
Static discharge hazards
Cutting hazards
Pinching/amputation hazards
Fire hazards
Accidental triggering of a gaseous fire suppression system dump
Accidental unplugging of network cables or power from servers
Infra-red laser hazard
Excessive noise

On the last point, it is worth noting that permitted sound levels at work in Europe were reduced in 2006. The EC Noise at Work Directive 2003/10/EC was made on 6th February 2003 and repeals and replaces 86/188/EEC as from (mainly) 15th February 2006.

Where is the money likely to go in a data centre?

[Chart: cost breakdown for an example small American data centre, courtesy of Future-Tech.]

Appendix 1 - Standards referenced in this document

ANSI/TIA/EIA-568-B Commercial Building Telecommunications Cabling Standard
ANSI/TIA/EIA-606-A Administration Standard for the Telecommunications Infrastructure of Commercial Buildings
ANSI/TIA/EIA-J-STD-607 Commercial building grounding and bonding requirements for telecommunications
ASHRAE Thermal Guidelines for Data Processing Environments
BS EN 54 Fire detection and fire alarm systems
BS 5499-4:2000 Safety signs, including fire safety signs - Code of practice for escape route signing
BS 5266-1 Code of practice for emergency lighting
BS 5839-1:2002 Fire detection and fire alarm systems for buildings - Code of practice for system design, installation, commissioning and maintenance
BS 6207 Mineral insulated cables and their terminations with a rated voltage not exceeding 750 V
BS 6387 Performance requirements for cables required to maintain circuit integrity under fire conditions
BS 6266:2002 Code of practice for fire protection for electronic equipment installations
BS 6701 Telecommunication cabling and equipment installations
BS 7671 Requirements for electrical installations: IEE Wiring Regulations, 16th Edition
BS ISO 14520-1:2000(E) Gaseous fire-extinguishing systems - Physical properties and system design -
Part 1: General requirements
BS 8300:2001 Design of buildings and their approaches to meet the needs of disabled people - Code of practice
Building Regulations 2000, Part M - Access and facilities for disabled people
DETR Advice on Alternatives and Guidelines for Users of Fire Fighting and Explosion Protection Systems
EN 50310 Application of equipotential bonding and earthing in buildings with information technology equipment
EN 50173-1 Information technology - Generic cabling systems - Part 1: General requirements and office areas
EN 50174-1 Information technology - Cabling installation - Part 1: Specification and quality assurance
EN 50174-2 Information technology - Cabling installation - Part 2: Installation planning and practices inside buildings
EN 50346 Information technology - Cabling installation - Testing of installed cabling
EN 12825 Raised access floors
ETS 300 253 Equipment engineering - Earthing and bonding of telecommunications equipment in telecommunication centres
Federal Standard 209E Airborne Particulate Cleanliness Classes in Cleanrooms and Clean Zones, Class 100,000
IEC 60309-1 Plugs, socket-outlets and couplers for industrial purposes - Part 1: General requirements
IEC 60320-1 Appliance couplers for household and similar general purposes - Part 1: General requirements

IEC 60332-3-10 Tests on electric cables under fire conditions - Part 3-10: Test for vertical flame spread of vertically-mounted bunched wires or cables
IEC 60364 Electrical installations of buildings, various sections including Part 5-548: Earthing arrangements and equipotential bonding for information technology equipment
IEEE Std 1100 Powering and Grounding Sensitive Electronic Equipment
NFPA 262:2002 Standard Method of Test for Flame Travel and Smoke of Wires and Cables for Use in Air-Handling Spaces
ISO/IEC 14763-1 Information Technology - Implementation and operation of customer premises cabling - Part 1: Administration
ISO/IEC 11801:2002 Information technology - Generic cabling for customer premises
ITU-T K.27 Bonding configurations and earthing inside a telecommunications building
ITU-T K.31 Bonding configurations and earthing of telecommunications installations inside a subscriber's building
The Property Services Agency (PSA) Method of Building Performance Specification 'Platform Floors (Raised Access Floors)', MOB PF2 PS
TIA 942 Telecommunications Infrastructure Standard for Data Centers, April 2005
VDI 2054 Air conditioning systems for computer areas

More information

DATA CENTRES UNDERSTANDING THE ISSUES TECHNICAL ARTICLE

DATA CENTRES UNDERSTANDING THE ISSUES TECHNICAL ARTICLE DATA CENTRES UNDERSTANDING THE ISSUES TECHNICAL ARTICLE Molex Premise Networks EXECUTIVE SUMMARY The term data centre usually conjures up an image of a high-tech IT environment, about the size of a football

More information

Energy Efficiency Best Practice Guide Data Centre and IT Facilities

Energy Efficiency Best Practice Guide Data Centre and IT Facilities 2 Energy Efficiency Best Practice Guide Data Centre and IT Facilities Best Practice Guide Pumping Systems Contents Medium-sized data centres energy efficiency 3 1 Introduction 4 2 The business benefits

More information

Compaq Rack Options and Accessories. Monitors and Keyboards

Compaq Rack Options and Accessories. Monitors and Keyboards 1 Compaq Rack Options and Accessories Compaq offers a wide variety of rack options and accessories that help you to complete your 9000 and 10000 series racks. Monitors and Keyboards TFT5600RKM United States

More information

Introduction to Data Centres

Introduction to Data Centres Introduction to Data Centres Grant Sauls CCDA CDNIDS CDCSNDS CDCD DCCA JNCIA ER JNCIS E Project+ DCE CCTT (FIA) CDS Certified Data Center Design Specialist AGENDA 9am 5pm What and why we have Data Centre

More information

Ten Steps to Solving Cooling Problems Caused by High- Density Server Deployment

Ten Steps to Solving Cooling Problems Caused by High- Density Server Deployment Ten Steps to Solving Cooling Problems Caused by High- Density Server Deployment By Peter Hannaford White Paper #42 Revision 1 Executive Summary High-density servers present a significant cooling challenge.

More information

AVAILABLE TO DOWNLOAD ON THE APOLLO APP. Fire Alarm Systems Design. a guide to BS5839

AVAILABLE TO DOWNLOAD ON THE APOLLO APP. Fire Alarm Systems Design. a guide to BS5839 AVAILABLE TO DOWNLOAD ON THE APOLLO APP a guide to Fire Alarm Systems Design BS5839 Part1:2013 The Regulatory Reform (Fire Safety) Order (FSO) became law on 1 October 2006 Legally you must comply! What

More information

Data Centers and Mission Critical Facilities Operations Procedures

Data Centers and Mission Critical Facilities Operations Procedures Planning & Facilities Data Centers and Mission Critical Facilities Operations Procedures Attachment A (Referenced in UW Information Technology Data Centers and Mission Critical Facilities Operations Policy)

More information

2006 APC corporation. Cooling Solutions and Selling Strategies for Wiring Closets and Small IT Rooms

2006 APC corporation. Cooling Solutions and Selling Strategies for Wiring Closets and Small IT Rooms Cooling Solutions and Selling Strategies for Wiring Closets and Small IT Rooms Agenda Review of cooling challenge and strategies Solutions to deal with wiring closet cooling Opportunity and value Power

More information

Technical specifications. Containerized data centre NTR CDC 40f+

Technical specifications. Containerized data centre NTR CDC 40f+ Technical specifications Containerized data centre NTR CDC 40f+ CONTENT 1 THE PURPOSE OF CDC... 3 2 BASIC PROPERTIES... 4 2.1 EXTERNAL DIMENSIONS... 4 2.2 INSTALLATION REQIREMENTS... 4 2.3 ELECTRICAL CONNECTION

More information

Data Center Components Overview

Data Center Components Overview Data Center Components Overview Power Power Outside Transformer Takes grid power and transforms it from 113KV to 480V Utility (grid) power Supply of high voltage power to the Data Center Electrical Room

More information

ANSI/TIA-942 Telecommunications Infrastructure Standards for Data Centers

ANSI/TIA-942 Telecommunications Infrastructure Standards for Data Centers ANSI/TIA-942 Telecommunications Infrastructure Standards for Data Centers Jonathan Jew jew@j-and and-m.com J&M Consultants, Inc Co-chair TIA TR-42.1.1 data center committee Co-chair BICSI data centers

More information

Data Centre Services. JT Rue Des Pres Data Centre Facility Product Description

Data Centre Services. JT Rue Des Pres Data Centre Facility Product Description JT Rue Des Pres Data Centre Facility Product Description JT s Data Centre Hosting Service provides a secure computer room environment with protected and backup power, security and bandwidth. Data Centre

More information

Cooling Capacity Factor (CCF) Reveals Stranded Capacity and Data Center Cost Savings

Cooling Capacity Factor (CCF) Reveals Stranded Capacity and Data Center Cost Savings WHITE PAPER Cooling Capacity Factor (CCF) Reveals Stranded Capacity and Data Center Cost Savings By Kenneth G. Brill, Upsite Technologies, Inc. Lars Strong, P.E., Upsite Technologies, Inc. 505.798.0200

More information

JAWAHARLAL NEHRU UNIVERSITY

JAWAHARLAL NEHRU UNIVERSITY School of Biotechnology JAWAHARLAL NEHRU UNIVERSITY New Delhi 110 067 Tender No. JNU/SBT/DBT-BUILDER/Data Centre/2015-16 Sealed Quotation for the establishment of a Data Centre for High-End Computational

More information

IBM Twin Data Center Complex Ehningen Peter John IBM BS [email protected]. 2011 IBM Corporation

IBM Twin Data Center Complex Ehningen Peter John IBM BS peter.john@de.ibm.com. 2011 IBM Corporation IBM Twin Data Center Complex Ehningen Peter John IBM BS [email protected] Overview Profile IBM owned facility 6447 m² IT-Space Infrastructure concurrent maintainable (Tier Level 3) Feed-ins of Power

More information

Combining Cold Aisle Containment with Intelligent Control to Optimize Data Center Cooling Efficiency

Combining Cold Aisle Containment with Intelligent Control to Optimize Data Center Cooling Efficiency A White Paper from the Experts in Business-Critical Continuity TM Combining Cold Aisle Containment with Intelligent Control to Optimize Data Center Cooling Efficiency Executive Summary Energy efficiency

More information

I.S. 3218 :2013 Fire Detection & Alarm Systems

I.S. 3218 :2013 Fire Detection & Alarm Systems I.S. 3218 :2013 Fire Detection & Alarm Systems Overview of significant changes 26 th March 2014 FPS Ltd Today s Programme Commencement Transition Competence & Qualifications System Certification System

More information

AisleLok Modular Containment vs. Legacy Containment: A Comparative CFD Study of IT Inlet Temperatures and Fan Energy Savings

AisleLok Modular Containment vs. Legacy Containment: A Comparative CFD Study of IT Inlet Temperatures and Fan Energy Savings WH I TE PAPE R AisleLok Modular Containment vs. : A Comparative CFD Study of IT Inlet Temperatures and Fan Energy Savings By Bruce Long, Upsite Technologies, Inc. Lars Strong, P.E., Upsite Technologies,

More information

IT White Paper MANAGING EXTREME HEAT: COOLING STRATEGIES FOR HIGH-DENSITY SYSTEMS

IT White Paper MANAGING EXTREME HEAT: COOLING STRATEGIES FOR HIGH-DENSITY SYSTEMS IT White Paper MANAGING EXTREME HEAT: COOLING STRATEGIES FOR HIGH-DENSITY SYSTEMS SUMMARY As computer manufacturers pack more and more processing power into smaller packages, the challenge of data center

More information

CoolTeg Plus Chilled Water (CW) version: AC-TCW

CoolTeg Plus Chilled Water (CW) version: AC-TCW version: 12-04-2013 CONTEG DATASHEET TARGETED COOLING CoolTeg Plus Chilled Water (CW) version: AC-TCW CONTEG, spol. s r.o. Czech Republic Headquarters: Na Vítězné pláni 1719/4 140 00 Prague 4 Tel.: +420

More information

Energy Efficient Server Room and Data Centre Cooling. Computer Room Evaporative Cooler. EcoCooling

Energy Efficient Server Room and Data Centre Cooling. Computer Room Evaporative Cooler. EcoCooling Energy Efficient Server Room and Data Centre Cooling Computer Room Evaporative Cooler EcoCooling EcoCooling CREC Computer Room Evaporative Cooler Reduce your cooling costs by over 90% Did you know? An

More information

DATA CENTER RACK SYSTEMS: KEY CONSIDERATIONS IN TODAY S HIGH-DENSITY ENVIRONMENTS WHITEPAPER

DATA CENTER RACK SYSTEMS: KEY CONSIDERATIONS IN TODAY S HIGH-DENSITY ENVIRONMENTS WHITEPAPER DATA CENTER RACK SYSTEMS: KEY CONSIDERATIONS IN TODAY S HIGH-DENSITY ENVIRONMENTS WHITEPAPER EXECUTIVE SUMMARY Data center racks were once viewed as simple platforms in which to neatly stack equipment.

More information

Strategies for Deploying Blade Servers in Existing Data Centers

Strategies for Deploying Blade Servers in Existing Data Centers Strategies for Deploying Blade Servers in Existing Data Centers By Neil Rasmussen White Paper #125 Revision 1 Executive Summary When blade servers are densely packed, they can exceed the power and cooling

More information

Energy Efficiency Opportunities in Federal High Performance Computing Data Centers

Energy Efficiency Opportunities in Federal High Performance Computing Data Centers Energy Efficiency Opportunities in Federal High Performance Computing Data Centers Prepared for the U.S. Department of Energy Federal Energy Management Program By Lawrence Berkeley National Laboratory

More information

HIGHLY EFFICIENT COOLING FOR YOUR DATA CENTRE

HIGHLY EFFICIENT COOLING FOR YOUR DATA CENTRE HIGHLY EFFICIENT COOLING FOR YOUR DATA CENTRE Best practices for airflow management and cooling optimisation AIRFLOW MANAGEMENT COLD AND HOT AISLE CONTAINMENT DC ASSESSMENT SERVICES Achieving true 24/7/365

More information

Benefits of. Air Flow Management. Data Center

Benefits of. Air Flow Management. Data Center Benefits of Passive Air Flow Management in the Data Center Learning Objectives At the end of this program, participants will be able to: Readily identify if opportunities i where networking equipment

More information

Green Data Centre Design

Green Data Centre Design Green Data Centre Design A Holistic Approach Stantec Consulting Ltd. Aleks Milojkovic, P.Eng., RCDD, LEED AP Tommy Chiu, EIT, RCDD, LEED AP STANDARDS ENERGY EQUIPMENT MATERIALS EXAMPLES CONCLUSION STANDARDS

More information

Virtual Data Centre Design A blueprint for success

Virtual Data Centre Design A blueprint for success Virtual Data Centre Design A blueprint for success IT has become the back bone of every business. Advances in computing have resulted in economies of scale, allowing large companies to integrate business

More information

CANNON T4 MINI / MICRO DATA CENTRE SYSTEMS

CANNON T4 MINI / MICRO DATA CENTRE SYSTEMS CANNON T4 MINI / MICRO DATA CENTRE SYSTEMS Air / Water / DX Cooled Cabinet Solutions Mini Data Centre All in One Solution: Where there is a requirement for standalone computing and communications, or highly

More information

Specifying an IT Cabling System

Specifying an IT Cabling System Specifying an IT Cabling System This guide will help you produce a specification for an IT cabling system that meets your organisation s needs and gives you value for money. You will be able to give your

More information

The New Data Center Cooling Paradigm The Tiered Approach

The New Data Center Cooling Paradigm The Tiered Approach Product Footprint - Heat Density Trends The New Data Center Cooling Paradigm The Tiered Approach Lennart Ståhl Amdahl, Cisco, Compaq, Cray, Dell, EMC, HP, IBM, Intel, Lucent, Motorola, Nokia, Nortel, Sun,

More information

EXAMPLE OF A DATA CENTRE BUILD

EXAMPLE OF A DATA CENTRE BUILD EXAMPLE OF A DATA CENTRE BUILD BT Harmondsworth Data Centre Hall 2 Upgrade Temperature Control have been involved in providing specialist to data centres for over 20 years, starting from providing specialist

More information

Data Centers. Mapping Cisco Nexus, Catalyst, and MDS Logical Architectures into PANDUIT Physical Layer Infrastructure Solutions

Data Centers. Mapping Cisco Nexus, Catalyst, and MDS Logical Architectures into PANDUIT Physical Layer Infrastructure Solutions Data Centers Mapping Cisco Nexus, Catalyst, and MDS Logical Architectures into PANDUIT Physical Layer Infrastructure Solutions 1 Introduction The growing speed and footprint of data centers is challenging

More information

Managing Cooling Capacity & Redundancy In Data Centers Today

Managing Cooling Capacity & Redundancy In Data Centers Today Managing Cooling Capacity & Redundancy In Data Centers Today About AdaptivCOOL 15+ Years Thermal & Airflow Expertise Global Presence U.S., India, Japan, China Standards & Compliances: ISO 9001:2008 RoHS

More information

RVH/BCH Data Centres Proposals for Surge Protection and Ventilation

RVH/BCH Data Centres Proposals for Surge Protection and Ventilation REPORT Proposals for Surge Protection and Ventilation BELFAST HEALTH AND SOCIAL CARE TRUST ROYAL GROUP OF HOSPITALS GROSVENOR ROAD BELFAST BT12 6BA Rev. 01 6th September 2013 Table of Contents 1.0 Ventilation

More information

Data Centre Best Practices Summary

Data Centre Best Practices Summary Data Centre Best Practices Summary Delivering Successful Technology Rooms 1 END2END s Data Centre Best Practices END2END design Data Centres/Computer Rooms based on 7 major principles: Understanding the

More information

Data Centre Services. JT First Tower Lane Data Centre Facility Product Description

Data Centre Services. JT First Tower Lane Data Centre Facility Product Description JT First Tower Lane Data Centre Facility Product Description JT s Data Centre Hosting Service provides a secure computer room enviroment with protected incoming power supplies, state-of-the-art security

More information

Rack Hygiene. Data Center White Paper. Executive Summary

Rack Hygiene. Data Center White Paper. Executive Summary Data Center White Paper Rack Hygiene April 14, 2011 Ed Eacueo Data Center Manager Executive Summary This paper describes the concept of Rack Hygiene, which positions the rack as an airflow management device,

More information

TIA 569 Standards Update Pathways & Spaces. Ignacio Diaz, RCDD April 25, 2012

TIA 569 Standards Update Pathways & Spaces. Ignacio Diaz, RCDD April 25, 2012 TIA 569 Standards Update Pathways & Spaces Ignacio Diaz, RCDD April 25, 2012 The purpose of this Standard is to standardize specific pathway and space design and construction practices in support of telecommunications

More information

Reducing Room-Level Bypass Airflow Creates Opportunities to Improve Cooling Capacity and Operating Costs

Reducing Room-Level Bypass Airflow Creates Opportunities to Improve Cooling Capacity and Operating Costs WHITE PAPER Reducing Room-Level Bypass Airflow Creates Opportunities to Improve Cooling Capacity and Operating Costs By Lars Strong, P.E., Upsite Technologies, Inc. 505.798.000 upsite.com Reducing Room-Level

More information

INSTALLATION GUIDELINES for SOLAR PHOTOVOLTAIC SYSTEMS 1

INSTALLATION GUIDELINES for SOLAR PHOTOVOLTAIC SYSTEMS 1 City of Cotati Building Division 201 W. Sierra Ave. Cotati, CA 94931 707 665-3637 Fax 792-4604 INSTALLATION GUIDELINES for SOLAR PHOTOVOLTAIC SYSTEMS 1 Any PV system on a new structures should be included

More information

HEAVY DUTY STORAGE GAS

HEAVY DUTY STORAGE GAS Multi-Fin flue technology Flue damper saves energy Electronic controls HEAVY DUTY STORAGE GAS Dependability The Rheem heavy duty gas range is the work horse of the industry having proved itself over many

More information

NUIG Campus Network - Structured Cabling System Standards Information s Solutions & Services (ISS)

NUIG Campus Network - Structured Cabling System Standards Information s Solutions & Services (ISS) NUIG Campus Network - Structured Cabling System Standards Information s Solutions & Services (ISS) Revision date 28 th January 2014 The purpose of this document is to assist Building Refurbishment and

More information

Enclosure and Airflow Management Solution

Enclosure and Airflow Management Solution Enclosure and Airflow Management Solution CFD Analysis Report Des Plaines, Illinois December 22, 2013 Matt Koukl- DECP Mission Critical Systems Affiliated Engineers, Inc. Contents Executive Summary...

More information

ADC-APC Integrated Cisco Data Center Solutions

ADC-APC Integrated Cisco Data Center Solutions Advanced Infrastructure Solution for Cisco Data Center 3.0 Ordering Guide ADC-APC Integrated Cisco Data Center Solutions The evolution of data centers is happening before our eyes. This is due to the

More information

Increasing Data Center Efficiency by Using Improved High Density Power Distribution

Increasing Data Center Efficiency by Using Improved High Density Power Distribution Increasing Data Center Efficiency by Using Improved High Density Power Distribution By Neil Rasmussen White Paper #128 Executive Summary A new approach to power distribution for high density server installations

More information

Elements of Energy Efficiency in Data Centre Cooling Architecture

Elements of Energy Efficiency in Data Centre Cooling Architecture Elements of Energy Efficiency in Data Centre Cooling Architecture Energy Efficient Data Center Cooling 1 STULZ Group of Companies Turnover 2006 Plastics Technology 400 Mio A/C Technology 200 Mio Total

More information

I.S. 3218 :2013 Fire Detection & Alarm Systems

I.S. 3218 :2013 Fire Detection & Alarm Systems I.S. 3218 :2013 Fire Detection & Alarm Systems Overview of significant changes 26 th March 2014 FPS Ltd Today s Programme Commencement Transition Competence & Qualifications System Certification System

More information