Raised Floors vs Hard Floors for Data Center Applications


White Paper 19, Revision 3
by Neil Rasmussen

Executive summary

Raised floors were once a standard feature of data centers, but over time a steadily growing fraction of data centers has been built on hard floors. Many of the traditional reasons for the raised floor no longer exist, and some of the costs and limitations that a raised floor creates can be avoided with hard-floor designs. This paper discusses the factors to consider when determining whether a data center should use a raised floor or a hard floor design.

Revision notice

This paper was updated in 2014 to reflect more recent trends and practices regarding floor designs.

White Papers are part of the Schneider Electric white paper library produced by Schneider Electric's Data Center Science Center.

Introduction

The basic science and engineering of the raised floor was fully developed in the 1960s and was described in detail in the 1983 US Federal Information Processing Standard 94. The essential design of raised floors for data centers has remained relatively unchanged for 40 years. In the telecommunications business the raised floor has never been common, and the convergence of telecommunications and IT systems raised the question of which approach to use. Recently, more and more IT data centers are being constructed without a raised floor. A review of the history, capabilities, and limitations of the raised floor offers insight into this trend and provides guidance on the most appropriate solution for specific data center applications.

Elements of the raised floor

The raised floor was developed and implemented as a system intended to provide the following functions:

- A cold air distribution system for cooling IT equipment
- Tracks, conduits, or supports for data cabling
- A location for power cabling
- A copper ground grid for grounding of equipment
- A location to run chilled water or other utility piping

To understand the evolution of the raised floor, it is important to examine each of these functions, the original requirement that made the raised floor the appropriate solution, and how that requirement has changed over time. In the following sections, the original and current requirements for each of these functions are contrasted.

The raised floor as a cold air distribution system for cooling IT equipment

Early data centers contained equipment of many different shapes and sizes placed at unstructured locations, so it was impossible to plan in advance where cooling would be needed. The ability to relocate vented tiles to supply cooling where needed was therefore essential. Today, the standardization of IT equipment form factors and airflow allows users to plan in advance where rows of IT equipment will be located and to provide cable trays, power distribution, and cooling at predetermined locations. This allows better use of space and well-defined airflows, yielding significantly higher equipment density and energy efficiency. Therefore, the need to move cooling pathways is greatly reduced or eliminated in modern data centers.

Some early IT equipment actually required that air be supplied through the bottom of the cabinet. This has changed, and now virtually all equipment uses front-to-back airflow, allowing use in both raised floor and hard floor environments.

Early data centers operated at less than 2 kW per cabinet. At this power density, a data center could reliably operate with a raised floor of 0.5 m (1.6 ft) depth without creating hot spots. Today's data centers operate with an average power density of 5-10 kW per cabinet or even higher. To supply air efficiently and uniformly at this power density with a raised floor requires a floor depth of 1 m (3.3 ft) or more, with no underfloor obstructions; a rough estimate of the airflow involved is sketched below.
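The airflow a plenum must deliver scales directly with cabinet power. The following minimal sketch (not part of the original paper) estimates the volumetric airflow per cabinet from the sensible heat equation, assuming standard air properties and an assumed 11°C (20°F) air temperature rise across the IT equipment; real designs depend on the actual supply temperature and equipment delta-T.

    # Rough estimate of required cooling airflow per cabinet (illustrative
    # assumptions, not from the paper): Q = P / (rho * cp * deltaT)
    def required_airflow_m3s(power_kw, delta_t_c=11.0, rho=1.2, cp=1005.0):
        """Airflow (m^3/s) needed to remove power_kw with a delta_t_c temperature rise."""
        return power_kw * 1000.0 / (rho * cp * delta_t_c)

    for kw in (2, 5, 10):
        q = required_airflow_m3s(kw)
        print(f"{kw} kW cabinet: {q:.2f} m^3/s  (~{q * 2119:.0f} CFM)")

    # A 2 kW legacy cabinet needs roughly 320 CFM, while a 10 kW cabinet needs
    # roughly 1,600 CFM -- about five times the airflow through the same tile
    # area, which is why deeper, unobstructed plenums are needed at high density.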

Even with a 1 m raised floor, it is difficult to maintain uniformity of airflow, and some vented tiles may actually draw air downward due to the venturi effect. For more information on this topic, see White Paper 121, Airflow Uniformity Through Perforated Tiles in a Raised-Floor Data Center.

Air conditioners for early data centers were specifically designed to push air down into a raised floor, so designing around this equipment required a raised floor. Today, data center air conditioners are available in many configurations, including row-based air conditioners, fresh air and indirect fresh air economizers, and rear-door heat exchangers. None of these solutions requires a raised floor, and some actually work better on a hard floor. Even the legacy CRAC and CRAH type air conditioners offer floor stand options that allow them to be used on a hard floor.

The raised floor as a path for data cabling

In early data centers a variety of bulky multi-conductor copper cables connected IT cabinets. These cables needed to be as short as possible to avoid signal degradation. The raised floor was an ideal location to route them, and the presence of cables did not significantly impact cooling because the underfloor airflow volume was low. Today, data center interconnection cables are either fiber or high-bandwidth Ethernet, capable of operating over much longer distances. Easy access to change cables is a common requirement, and it is important that cables not impede the airflow to high density IT equipment. The raised floor was the only practical way to meet the original requirement, but it is no longer necessary and is poorly suited to the current requirement, given the difficulty of accessing data cables underfloor and the better alternative of overhead cable trays. For this reason, most new data centers that use a raised floor today run some or all data cabling overhead to maximize underfloor airflow.

The raised floor as a path for power cabling

In early data centers IT equipment was often hard-wired by electricians using dedicated circuits, and equipment often required that power connections enter through the bottom. The raised floor provided the best way to run circuits to early IT equipment, and operators could access and change circuits by removing floor tiles. Today, IT equipment and cabinets are designed to allow power connections through the top or bottom. Modular PDUs or overhead busway allow much more convenient circuit changes than working with underfloor power whips and conduit. The airflow required by high density equipment can be significantly impeded by underfloor power cables, and air leakage around power cable penetrations at the PDU and at the rack cabinets can significantly reduce the energy efficiency of the data center. See White Paper 159, How Overhead Cabling Saves Energy in Data Centers.

The raised floor was well suited to the original requirements for power distribution, but today it has major disadvantages regarding airflow blockage, air leakage, and access for wiring changes. The increasing use of overhead busway and the desire to keep wiring from blocking airflow have led to the common use of overhead power distribution in both raised floor and hard floor designs. Therefore, the raised floor no longer has an important role in power distribution.
The raised floor as a ground grid for grounding of equipment

Early data centers depended on solid grounding between interconnected IT equipment to preserve signal integrity for ground-referenced communication such as parallel communication busses and RS-232 signals.

Data centers were commonly constructed with a copper signal reference grid to which all equipment was bonded with a special grounding conductor (typically a flat braided strap). Signal reference grids needed to be close to the IT devices and were typically located under the raised floor or even integrated into the raised floor framework. Today, copper communication technology such as Ethernet and RS-485 is balanced and/or galvanically isolated, and optical fiber carries no ground reference at all, so signal integrity no longer depends on grounding between IT devices. Grounding remains essential for safety, but this function is provided through every plugged connection by the grounding wire supplied with every branch circuit. Therefore, the signal reference grid function that was often implemented as part of a raised floor system is no longer necessary. In fact, in legacy data centers equipped with a signal reference grid, it is quite uncommon to find any equipment connected to it.

The raised floor as a location to run chilled water or other utility piping

The raised floor was the only practical way for early data centers to deliver water piping to water-cooled IT equipment such as early mainframe computers. Today, very few water-cooled IT devices are found in data centers, but some newer cooling system designs still require water distribution within the IT space, including designs based on:

- Row-based chilled water coolers
- Rear door heat exchangers
- CRAH units distributed within the IT space to meet high density requirements
- Direct water-cooled IT devices

The raised floor remains a natural location for water supply systems in cases where water is required throughout the IT room. Note that in traditional CRAH systems, where the air conditioners are located at the periphery of the IT room, it is practical to deliver water to those units via pipes on or through walls without a raised floor. Overhead water distribution systems are available and are commonly used for row-based chilled water cooling (see Figure 4 described later in this paper). However, while overhead piping is practical for row-based cooling and the occasional water-cooled IT device, it remains impractical for rear door heat exchangers (due to the quantity of pipes) and for CRAH units located centrally in the IT space (due to the size of the pipes). If underfloor piping is selected, then a raised floor is required, but it only needs a depth of 0.4 m (16 inches) or less. The decision to use water cooling for IT devices, IT pods, or row-based coolers is one of the most compelling reasons to consider a raised floor, but in this case the raised floor does not handle airflow, so it can be lower in height and cost and avoids many of the other challenges, discussed later in this paper, that are caused by a deep raised floor.

Precautions when using a raised floor

The examination above indicates that the raised floor was a very effective and practical way to meet the original requirements of early data centers. It is also apparent that many of the original requirements that dictated the use of the raised floor no longer exist; data center requirements have evolved and changed significantly. It is therefore important to consider the potential problems that are unique to the raised floor when choosing between a raised floor and a hard floor design.

Earthquake

The raised floor greatly increases the difficulty of assuring or determining a seismic rating for a data center. Supporting equipment above the floor on a grid greatly compromises the ability to anchor equipment, and because each installation is different, it is almost impossible to test or validate the seismic rating of an installation. This is a very serious problem in cases where a seismic withstand capability is specified.

In and around Kobe, Japan, during the great earthquake of 1995, data centers experienced an extraordinary range of earthquake damage. Many data centers that should have been operational within hours or days were down for more than a month when a large number of supposedly earthquake-rated raised floor systems buckled, sending IT equipment crashing through the floor. Damaged equipment needed to be pulled out and repaired or replaced in complex and time-consuming operations. During the World Trade Center collapse of 2001, nearby data centers that should have survived the tragedy were seriously damaged and experienced extended downtime when vibrations in the buildings caused raised floor systems to buckle and collapse.

A downtime of 5 weeks, as was typical near Kobe, corresponds to roughly 50,000 minutes, compared with the 5 minutes per year of downtime allowed to achieve 5-nines availability. This is 10,000 times worse than the 5-nines design value. If earthquake downtime is allocated 10% of the availability budget, then the data centers near Kobe could not achieve 5-nines availability unless an earthquake of that magnitude were to occur only once every 100,000 years, which is not a realistic assumption.
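A minimal sketch of this availability arithmetic follows; the 5-week outage, the 5-nines target, and the 10% budget split come from the text above, and the paper rounds the resulting ratios to 10,000x and 100,000 years.

    # Availability arithmetic behind the Kobe example.
    MINUTES_PER_YEAR = 365 * 24 * 60                       # 525,600

    five_nines_budget = MINUTES_PER_YEAR * (1 - 0.99999)   # ~5.3 minutes/year
    event_downtime = 5 * 7 * 24 * 60                       # 5 weeks ~= 50,400 minutes

    print(f"5-nines budget:      {five_nines_budget:.1f} min/year")
    print(f"Event downtime:      {event_downtime} min "
          f"({event_downtime / five_nines_budget:,.0f}x the annual budget)")

    earthquake_budget = 0.10 * five_nines_budget           # minutes/year reserved for quakes
    years_between_events = event_downtime / earthquake_budget
    print(f"Required recurrence: once every {years_between_events:,.0f} years")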

When a raised floor is used, a number of factors contribute to the earthquake resilience of the system. First, the design of the raised floor pedestals and stringers must be sufficiently robust for the load requirement. Second, the system must be supported with appropriate bracing, which becomes more complex and expensive as the height of the floor is increased to handle modern IT equipment. Third, under stress the system can generate high lateral forces against the perimeter walls, which must be engineered to withstand this force or the system will collapse. Fourth, the tiles must be installed at all times, because the tiles are a critical part of the strength of the system. Many otherwise well designed data centers frequently violate this fourth requirement, as tiles are routinely removed for access to underfloor areas. To verify all the conditions of earthquake resistance, engineers can use mathematical models that simulate the response of the structure and the magnitude of the forces generated by the simulated event, producing a stress analysis such as the one shown in Figure 1. The engineers must then verify that the walls and floor system are designed to withstand the expected forces. This engineering process can discourage the use of a raised floor in highly seismic areas.

Figure 1: Example of an engineering stress analysis of a raised floor

Earthquake resilience is one of the reasons why telephone central office facilities do not use raised floors, and it is a reason why more high availability data centers are using hard floor designs.

Access

Equipment turnover in a modern data center is around two years, so data and power cabling is subject to frequent change. Cables are accessible under a raised floor when tiles are lifted, but the matrix of stringers can make it impractical to modify the cable paths. The impact of cables on airflow is typically not modeled during the design of the floor, and a common problem is that cables restrict airflow and create overheating of IT equipment. The use of underfloor cable trays to guide cabling often makes the airflow problems even worse. Removing tiles in a high density data center (more than 6 kW average per rack) for cable access can significantly disrupt the airflow to other IT cabinets, especially if multiple tiles are lifted at once. For these reasons, if a raised floor is used for airflow in a high density environment, it is not advisable to locate power or data cables underfloor. Since hard floor designs already need overhead cabling, this effectively means that all new high density data centers, whether raised floor or hard floor, should use overhead cabling. Note that overhead cable trays create their own hazards, because cable changes require personnel to work from ladders, with the associated safety issues.

Floor loading

Typical equipment racks can reach loaded weights of 1000 kg (2200 lb) and may need to be rolled to be relocated. In addition, the equipment used to move and place racks needs data center access. Special reinforcement of the underfloor support structures may be required in a raised floor environment, and in some cases the load capability may be restricted to certain aisles. Ensuring that floor loading limits are not exceeded requires significant cost and planning. The full load capability of a raised floor is realized only when all of the tiles are in place, because the tiles increase the buckling (lateral) strength of the floor. However, individual tiles and even entire rows of tiles are routinely pulled when cabling changes or maintenance are performed. Ideally, the raised floor should be designed so that its structural integrity does not depend on the tiles being installed, but this may add extra cost and complexity to the system.

Loss of data center space to ramps

In almost all cases, the installation of a raised floor requires that ramps be provided to allow people and equipment to move up from the building floor level to the raised floor level. (Mechanical lifts are an alternative to ramps that consume less space but are more costly.) A ramp is required at all main egress points, so most data centers require at least two ramps. These ramps and their surrounding areas consume a considerable amount of space, especially in a high density data center where the height of the floor requires a longer ramp. The typical maximum slope of a ramp is a pitch of 1:12, which means a 1 meter raised floor requires a ramp 12 meters (39 feet) long. The total space consumed by such a ramp with appropriate width, landings, etc. is typically around 15 m2 (161 ft2), corresponding to 30 m2 (323 ft2) for two ramps. This may not be a significant area for a low density data center, but for a high density data center located in a commercial building it can represent a significant cost or a loss of space equivalent to more than 10 IT cabinets.
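As a rough check of the space penalty, the sketch below computes ramp length and footprint from the floor height and the 1:12 slope cited above; the 1 m usable ramp width and the landing allowance are illustrative assumptions, not figures from the paper.

    # Rough ramp footprint estimate (only the 1:12 slope and floor heights come
    # from the text; width and landing area are hypothetical).
    def ramp_footprint(floor_height_m, slope=1/12, width_m=1.0, landing_m2=3.0):
        length_m = floor_height_m / slope          # 1 m rise at 1:12 -> 12 m run
        return length_m, length_m * width_m + landing_m2

    for h in (0.5, 1.0):
        length, area = ramp_footprint(h)
        print(f"{h} m floor: ramp ~{length:.0f} m long, ~{area:.0f} m^2 per ramp, "
              f"~{2 * area:.0f} m^2 for two ramps")

    # A 1 m floor height yields roughly 12 m of ramp and on the order of 15 m^2
    # per ramp (~30 m^2 for two), consistent with the figures cited above.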

Headroom

In some potential data center locations, the loss of headroom resulting from the installation of a raised floor is not acceptable. This can limit the options for locating a data center or create extra costs. In Japan, for example, it is sometimes necessary to cut out the floor of the next level of the building to create a double-height space in order to gain the headroom required to install a raised floor.

Conduit

When cabling is run under a raised floor it may become subject to special fire regulations. Under some construction codes the raised floor is considered an air plenum, and because of the moving and distributed air, many fire codes treat a fire in an air plenum as a special risk. Therefore, cabling under the raised floor is often required to be enclosed in fire-rated conduit, which may be metal or a special fire-rated polymer. The result is considerable cost and complexity to install this conduit, and a particularly difficult problem when conduit changes are required in an operating data center. This situation varies with local regulations.

Security

The raised floor is a space where people or devices may be concealed. In data centers that are partitioned with cages, such as co-location facilities, the raised floor represents a potential way to enter and access caged areas, especially as deeper raised floors create significant underfloor space. Some co-location facilities that do not use a raised floor cite this as a benefit of eliminating the raised floor.

Power distribution

The number of branch circuits per square foot in the modern data center is much greater than it was when the raised floor architecture was developed. During the mainframe era, a single hard-wired, high-amperage branch circuit could serve a cabinet occupying 6 floor tiles, or 2.2 m2 (24 ft2). Today, this same area could contain two racks, each of which could require 12 kW of 120 V circuits with an A and B feed, for a total of 12 branch circuits. The density of the conduits associated with this dramatic increase in branch circuits represents a serious obstacle to underfloor airflow, as shown in Figure 2, and can require a raised floor height of 1.2 m (4 ft) to ensure the needed airflow.

Figure 2: Example of how cabling can block airflow under a raised floor

Increasing the height of the raised floor to create additional space for cables compromises the structural integrity and compounds cost, floor loading, and earthquake issues. Again, this indicates that when a raised floor is used for air distribution in a high density data center, the power distribution should be located overhead to avoid compromising the airflow.
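The scale of the change can be expressed as circuits per unit of floor area; the short sketch below simply restates the figures cited above as a density ratio and makes no assumptions beyond them.

    # Branch-circuit density, then vs. now, using the figures cited above
    # (6-tile area = 2.2 m^2; today's example of 12 circuits in the same area).
    area_m2 = 2.2
    mainframe_circuits = 1
    modern_circuits = 12          # two 12 kW racks with A and B feeds, per the example

    print(f"Mainframe era:  {mainframe_circuits / area_m2:.1f} circuits per m^2")
    print(f"Modern example: {modern_circuits / area_m2:.1f} circuits per m^2 "
          f"({modern_circuits // mainframe_circuits}x increase)")

    # Each circuit brings its own whip or conduit under the floor, so the
    # underfloor blockage grows in proportion -- hence the 1.2 m plenum heights
    # cited above.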

Cleaning

The area under a raised floor is not convenient to clean. Dust, grit, and various items accumulate under the raised floor and are typically abandoned there, since the difficulty and accident risk associated with cleaning this area are considered serious obstacles. Removing a floor tile can cause dramatic shifts in the air motion under the floor, which can and has caused grit or even objects to be blown out into equipment or the eyes of personnel. Therefore, choosing a raised floor data center must come with a commitment to mandatory maintenance and cleaning, generally in the form of a professional cleaning service contract.

Cabling also tends to be abandoned under raised floors during transitions, because it is difficult to extract without potentially disturbing other cabling. Over time, a considerable amount of unused cable accumulates underfloor, blocking airflow. Some vendors, such as IBM, offer special services to help customers identify and extract excess underfloor cabling.

Safety

A tile left open poses a severe and unexpected risk to operators and visitors moving in the data center. In data centers with raised floors of 1.2 m (4 ft) or more, the risk of serious injury or death from a fall into an open tile location increases greatly. People working in a raised floor environment should be properly trained to mark the area of operation and to put in place all the typical precautions to avoid accidents.

Cost

The raised floor represents a significant cost. The typical cost of a raised floor, including engineering, material, fabrication, installation, and inspection, is on the order of $215 per square meter ($20 per square foot). Furthermore, the maximum space that might ultimately be used by the data center is normally built out with a raised floor, whether or not the current, near-term, or even ultimate requirement calls for that space. The $215 per square meter does not include the extra costs associated with power and data cabling, nor any modeling, structural engineering, reinforcement, or changes to the data center walls required for the system to meet seismic requirements. These components can add considerable cost, which should only be incurred if it is actually required.
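To put the $215 per square meter figure in perspective, the sketch below estimates the raised floor cost for a hypothetical room; the 500 m2 room size and the 40% initially occupied fraction are illustrative assumptions, not figures from the paper.

    # Illustrative raised-floor cost estimate (only the $215/m^2 rate comes from
    # the text; room size and initial occupancy are hypothetical).
    COST_PER_M2 = 215.0

    room_m2 = 500.0               # hypothetical total white space built out
    occupied_fraction = 0.4       # hypothetical fraction populated on day one

    total_cost = room_m2 * COST_PER_M2
    unused_cost = room_m2 * (1 - occupied_fraction) * COST_PER_M2
    print(f"Raised floor for {room_m2:.0f} m^2 of white space: ${total_cost:,.0f}")
    print(f"Portion spent on space not yet populated:          ${unused_cost:,.0f}")

    # Because the whole room is typically floored at once, much of this cost is
    # incurred years before the space is populated -- and it excludes cabling,
    # seismic engineering, and any wall reinforcement.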

Reasons for designing with a raised floor

Although more and more installations have eliminated the raised floor for the reasons above, many data centers are still designed with a raised floor. Interviews with users of raised floors conducted by Schneider Electric have identified the following reasons for this choice:

Perception

The raised floor is an icon symbolizing the high availability enterprise data center. For many companies, the presentation of their data center is an important part of facility tours for key customers. When raised floors are used for underfloor cabling, the data center presents a visually neater appearance. The raised floor is also typically much whiter than a hard floor, so the data center illumination levels appear brighter. Data centers that do not have a raised floor may be perceived by visitors or customers as incomplete, deficient, or less than the highest quality. The raised floor is sometimes used to reinforce the image of a data center even when it is not used for cooling. In some cases, raised floors have been installed that are not used for cooling or wiring and in fact have no function at all other than to create the desired image, for example by gluing tiles down to a hard floor. This issue of image is a major barrier to the elimination of the raised floor, but it is slowly changing as more and more data centers adopt hard-floor designs.

Familiarity

There is much more collective experience with raised floor air distribution design, so designers are more confident predicting system performance. Some data center designers have always designed with a raised floor and are therefore uncomfortable proposing different designs, and many data center professionals have only managed raised floor data centers in the past and are uncomfortable working with a new design. The means of cooling in a hard floor data center are not fully understood by all operators, and there is a misperception that hard floor designs are only suitable for hyperscale data centers and small server rooms. (Raised floor data centers are simple to understand because there is an obvious delivery of cold air near server inlets. Hard floor data centers almost always work on the principle of scavenging hot exhaust air and removing its heat so that all ambient air is cool, so there is often no clearly identifiable source of cool air near the IT loads. While this hard-floor approach is technically more efficient, it is more difficult to visualize.)

Location for water piping

An increasing number of data centers utilize close-coupled coolers such as row-based air conditioners or rear-door heat exchangers, and in a few cases IT equipment is water cooled. These systems require water distribution piping within the data center. While the water may be distributed overhead or underfloor, some users prefer the underfloor location because it tends to make the data center look less cluttered and it avoids the issue of potential overhead piping leaks or condensation. Such a raised floor does not distribute air and needs to be only high enough to contain the piping, so it can be less than 0.5 m (1.6 ft) high, simplifying the design and reducing cost. An example of this use is shown in Figure 3. This is a very appropriate use for a raised floor, even though it is not used to distribute air.

Figure 3: A raised floor used for water pipe supply to row-based coolers in a high density application and not used for airflow. The raised floor would need to be much higher to supply the required air.

Cooling design

Data center designers and operators value the flexibility provided by raised floor cooling designs. The raised floor allows the data center operator, without specialized contractors, to move vented tiles around in order to achieve a desired temperature profile. Such ad-hoc changes are much more difficult in a system using overhead ductwork or water piping. Some operators feel that without a raised floor they are giving up some of their ability to manage hot spots or control cooling.

Designing without a raised floor

The costs and problems associated with a raised floor can be eliminated only if a practical alternative exists. Fortunately, there is now considerable experience designing hard-floor data centers. The key issues of how to cable and how to cool in a hard floor environment are addressed below.

Cabling in a hard-floor environment

The previous discussion explained why overhead cabling is a best practice for any data center, even one that uses a raised floor for cooling. Data cabling is located in overhead cable troughs, and power cabling is either in cable troughs or distributed via busway. The requirements for cabling in a hard-floor and a raised floor environment are therefore essentially the same. However, in a hard floor environment it is common to use a suspended ceiling as an air return (as described later), so adequate space for cable trays must typically be provided above the IT cabinets and below the suspended ceiling.

Cooling in a hard-floor environment

The raised floor, when operating ideally, distributes cool air close to the inlets of the IT equipment. An alternative approach must provide an equivalent or better function, ensuring that IT equipment receives cool inlet air and is not fed the hot exhaust air generated by neighboring IT equipment. There are four basic ways to accomplish this in a hard floor environment:

Rear door heat exchangers. Individual racks are fitted with cooling coils, and tempered chilled water is supplied to each rack. The thermal output of a rack is essentially neutralized at the rack by the cooling coil, so the rack exhausts cool air to the room. In this way all IT equipment has access to cool inlet air without the need for CRACs or air distribution systems, and only minimal supplemental cooling is required. This approach can be very expensive and is mainly applied where a zone of the data center has very high density (20 kW or greater average over an area). It can be deployed on a hard floor, but it is often found on raised floors where the floor has proven incapable of providing the necessary airflow because of limited height or underfloor congestion. It is the least common approach to cooling a hard-floor environment.

Row-based cooling with hot aisle containment. Groups of racks are configured as a pod, and row-based coolers are located within the pod. The hot aisle of the pod is enclosed to contain the exhaust air and ensure it is processed directly by the row-based coolers. The coolers discharge cool air into the room, so all IT equipment receives cool intake air. This approach is very effective in small to medium sized data centers in commercial office buildings. An example of how water piping is supplied to row-based coolers in a hard floor environment is shown in Figure 4 below.
It is also widely used to allow deployment of high density equipment on raised floors where the existing raised floor has insufficient cooling capacity.

Suspended ceiling return to CRAC with vented ceiling tiles. This system can be viewed as an upside-down raised floor system. Instead of providing vented floor tiles on the supply side at the IT inlets, vented ceiling tiles are provided at the exhaust side of the IT equipment.

Instead of ducting the supply from the CRAC units into a raised floor, the return air of the CRAC units is ducted to the suspended ceiling. This system performs similarly to a raised floor, but without ramps and the other drawbacks of the raised floor, and at a much lower cost. However, the airflows are not fully contained, so this approach retains many of the inefficiencies of the raised floor.

Hot aisle containment with return to central cooling plant. The IT exhaust air is captured and ducted through an overhead plenum back to a central cooling plant. Containing the exhaust air with full separation from the IT inlet air increases the return air temperature, increasing the efficiency and capacity of the cooling plant. The plant may consist of traditional CRAC units, but can also utilize high efficiency direct or indirect fresh air economizer systems. When using an economizer system, this approach has the highest cooling energy efficiency, which is a key reason why it is used in most hyperscale data centers. It has also been successfully used in co-location facilities and commercial data centers of various sizes.

Figure 4: Overhead water supply to row-based coolers in a hard floor environment (view is above the racks; vertical black pipes are water supply)

The pressures to solve the problems of high density, high efficiency, and predictability have given rise to the new air conditioning technologies described above, based on row-oriented and rack-oriented methods. These new air conditioning systems are tightly coupled to the IT loads and integrate into rack cabinets or may be located overhead. Because these systems can simultaneously achieve high density, high efficiency, and predictability, their adoption is increasing. One important consequence of these new cooling technologies is that they do not require a raised floor. These technologies are described in more detail in White Paper 130, Choosing Between Room, Row, and Rack-based Cooling for Data Centers.

Raised floor vs hard floor air containment

High efficiency, high density data centers depend on separation of the hot and cool airstreams to maximize cooling capacity and efficiency while eliminating hot spots. This separation takes various forms, but when applied to IT pods it is typically described as hot or cold aisle containment. In a raised floor environment the most cost effective way to separate the airstreams is cold aisle containment, while in hard floor environments the most cost effective way is hot aisle containment. Both approaches separate the airstreams, but with hot aisle containment personnel and ancillary equipment are located in the cool aisle, while with cold aisle containment they are effectively located in the hot aisle. In practice this requires cold aisle containment systems to operate at lower temperatures to provide an acceptable environment for personnel.

By operating at higher temperatures, hot aisle containment provides higher efficiency and capacity for the same cooling plant. This is why the most efficient hyperscale data centers use hot aisle containment, and as a result they use hard floors. For a more complete discussion of hot vs cold aisle containment, see White Paper 135, Impact of Hot and Cold Aisle Containment on Data Center Temperature and Efficiency.

Choice of raised or hard floor for different types of data centers

The issues and options described above apply to any data center. However, there are clearly types of data centers where hard floors are an obvious choice, and others where raised floors are more appropriate.

Conditions favoring raised floors

The most compelling reason to use a raised floor is a cooling system that requires chilled water delivered into the IT space. As previously discussed, this includes systems based on row-based cooling, rear-door heat exchangers, or designs that locate CRAH units centrally in the IT space (not at the room periphery). In these cases the raised floor does not handle airflow and need not be very deep, reducing cost and improving safety.

The second appropriate use for raised floors is low density data centers where it is difficult or impossible to determine the row locations of IT devices in advance. A very good example is caged co-location space. In this case a raised floor of 1 m or slightly less can support an average density of approximately 5 kW per rack and can even function with some obstruction from underfloor cabling (the average density of actual co-location space typically runs around 3-4 kW per cabinet).

Conditions favoring hard floors

Small data centers or server rooms of fewer than 50 cabinets strongly favor a hard floor design. In these cases the ramps required by a raised floor significantly subtract from the available IT space, which is typically premium space. Many cooling technologies are now available for these smaller rooms that do not require a raised floor, such as the popular approach of row-based cooling with overhead piping, or refrigerant-based (DX) systems.

Hyperscale data centers and data centers based on a standard architecture, such as cloud-based systems, favor hard floor designs. These facilities operate at very high densities and often utilize fresh air or indirect fresh air economizer systems, which lend themselves to hot aisle containment strategies. For these facilities, a hard floor design is less expensive and actually more energy efficient.

Rooms with low headroom, such as commercial buildings found in Europe and Asia, can be difficult to fit with a raised floor deep enough to achieve the required power density.

Retrofitting data centers with an existing raised floor

A common problem arises when an existing raised-floor data center is stressed by the introduction of IT systems operating at a higher power density than the floor was designed to support. The data center begins to experience hot spots that may be difficult to correct, even if additional CRAH units are installed to pressurize the underfloor plenum. Such data centers also tend to have poor energy efficiency because the air conditioners operate at suboptimal conditions. In these cases, the raised floor is typically 0.5 m or less in depth and contains many obstructions to underfloor airflow.

While removal of the raised floor is a possible solution to this problem, it is often completely impractical if the data center must continue to operate during the improvements. In these cases, considerable improvement is possible by using row-based cooling to supplement the existing raised floor system, or by retrofitting cold aisle containment. For a complete discussion of how to deploy high density IT on an existing low density raised floor, see White Paper 134, Deploying High Density Pods in a Low Density Data Center, and White Paper 153, Implementing Hot and Cold Air Containment in Existing Data Centers.

Conclusion

Many of the reasons that led to the development of the raised floor no longer apply. The absence of a compelling requirement for a raised floor, combined with its cost and its limitations in supplying energy efficient cooling to high density data centers, suggests that many new data centers should consider hard-floor designs. Hard floor designs are now routinely implemented in all kinds of data centers and are the dominant form in server rooms and hyperscale data centers. Our experience shows that data center operators do not tend to return to raised floors after they have built a hard floor data center. Nevertheless, data centers are likely to use raised floors for some time due to the large base of experience with raised floor design, traditions of locating piping and wiring under the floor, and the intangible issues of perception and image.

Data centers that have existing raised floors often have difficulty supplying sufficient airflow to newer high density IT loads due to underfloor cable congestion and low plenum height, resulting in energy inefficiency and hot spots. Moving cabling overhead can improve this condition, and cooling solutions such as row-based cooling, hot aisle containment, and cold aisle containment can be applied to high-density pods to reduce the underfloor airflow requirement and extend the life of the data center.

About the author

Neil Rasmussen is a Senior VP of Innovation for Schneider Electric. He establishes the technology direction for the world's largest R&D budget devoted to power, cooling, and rack infrastructure for critical networks. Neil holds 25 patents related to high-efficiency and high-density data center power and cooling infrastructure, and has published over 50 white papers related to power and cooling systems, many published in more than 10 languages, most recently with a focus on the improvement of energy efficiency. He is an internationally recognized keynote speaker on the subject of high-efficiency data centers. Neil is currently working to advance the science of high-efficiency, high-density, scalable data center infrastructure solutions and is a principal architect of the APC InfraStruXure system. Prior to founding APC in 1981, Neil received his bachelor's and master's degrees in electrical engineering from MIT, where he did his thesis on the analysis of a 200 MW power supply for a tokamak fusion reactor. From 1979 to 1981 he worked at MIT Lincoln Laboratories on flywheel energy storage systems and solar electric power systems.

Resources

Airflow Uniformity Through Perforated Tiles in a Raised-Floor Data Center, White Paper 121
How Overhead Cabling Saves Energy in Data Centers, White Paper 159
Choosing Between Room, Row, and Rack-based Cooling for Data Centers, White Paper 130
Impact of Hot and Cold Aisle Containment on Data Center Temperature and Efficiency, White Paper 135
Implementing Hot and Cold Air Containment in Existing Data Centers, White Paper 153

Browse all white papers: whitepapers.apc.com
Browse all TradeOff Tools: tools.apc.com

Contact us

For feedback and comments about the content of this white paper: Data Center Science Center
If you are a customer and have questions specific to your data center project: contact your Schneider Electric representative.

Schneider Electric. All rights reserved.


CURBING THE COST OF DATA CENTER COOLING. Charles B. Kensky, PE, LEED AP BD+C, CEA Executive Vice President Bala Consulting Engineers CURBING THE COST OF DATA CENTER COOLING Charles B. Kensky, PE, LEED AP BD+C, CEA Executive Vice President Bala Consulting Engineers OVERVIEW Compare Cooling Strategies in Free- Standing and In-Building

More information

Guidelines for Specification of Data Center Power Density

Guidelines for Specification of Data Center Power Density Guidelines for Specification of Data Center Power Density White Paper 120 Revision 1 by Neil Rasmussen > Executive summary Conventional methods for specifying data center density are ambiguous and misleading.

More information

A White Paper from the Experts in Business-Critical Continuity TM. Data Center Cooling Assessments What They Can Do for You

A White Paper from the Experts in Business-Critical Continuity TM. Data Center Cooling Assessments What They Can Do for You A White Paper from the Experts in Business-Critical Continuity TM Data Center Cooling Assessments What They Can Do for You Executive Summary Managing data centers and IT facilities is becoming increasingly

More information

Data Center Components Overview

Data Center Components Overview Data Center Components Overview Power Power Outside Transformer Takes grid power and transforms it from 113KV to 480V Utility (grid) power Supply of high voltage power to the Data Center Electrical Room

More information

Energy Efficiency Opportunities in Federal High Performance Computing Data Centers

Energy Efficiency Opportunities in Federal High Performance Computing Data Centers Energy Efficiency Opportunities in Federal High Performance Computing Data Centers Prepared for the U.S. Department of Energy Federal Energy Management Program By Lawrence Berkeley National Laboratory

More information

Optimizing Network Performance through PASSIVE AIR FLOW MANAGEMENT IN THE DATA CENTER

Optimizing Network Performance through PASSIVE AIR FLOW MANAGEMENT IN THE DATA CENTER Optimizing Network Performance through PASSIVE AIR FLOW MANAGEMENT IN THE DATA CENTER Lylette Macdonald, RCDD Legrand Ortronics BICSI Baltimore 2011 Agenda: Discuss passive thermal management at the Rack

More information

Improving Data Center Energy Efficiency Through Environmental Optimization

Improving Data Center Energy Efficiency Through Environmental Optimization Improving Data Center Energy Efficiency Through Environmental Optimization How Fine-Tuning Humidity, Airflows, and Temperature Dramatically Cuts Cooling Costs William Seeber Stephen Seeber Mid Atlantic

More information

How to Meet 24 by Forever Cooling Demands of your Data Center

How to Meet 24 by Forever Cooling Demands of your Data Center W h i t e P a p e r How to Meet 24 by Forever Cooling Demands of your Data Center Three critical aspects that are important to the operation of computer facilities are matching IT expectations with the

More information

Analysis of the UNH Data Center Using CFD Modeling

Analysis of the UNH Data Center Using CFD Modeling Applied Math Modeling White Paper Analysis of the UNH Data Center Using CFD Modeling By Jamie Bemis, Dana Etherington, and Mike Osienski, Department of Mechanical Engineering, University of New Hampshire,

More information

- White Paper - Data Centre Cooling. Best Practice

- White Paper - Data Centre Cooling. Best Practice - White Paper - Data Centre Cooling Best Practice Release 2, April 2008 Contents INTRODUCTION... 3 1. AIR FLOW LEAKAGE... 3 2. PERFORATED TILES: NUMBER AND OPENING FACTOR... 4 3. PERFORATED TILES: WITH

More information

AC vs DC Power Distribution for Data Centers

AC vs DC Power Distribution for Data Centers AC vs DC Power Distribution for Data Centers By Neil Rasmussen White Paper #63 Revision 4 Executive Summary Various types of DC power distribution are examined as alternatives to AC distribution for data

More information

Power and Cooling Capacity Management for Data Centers

Power and Cooling Capacity Management for Data Centers Power and Cooling Capacity for Data Centers By Neil Rasmussen White Paper #150 Executive Summary High density IT equipment stresses the power density capability of modern data centers. Installation and

More information

Comparing Data Center Power Distribution Architectures

Comparing Data Center Power Distribution Architectures Comparing Data Center Power Distribution Architectures White Paper 129 Revision 3 by Neil Rasmussen and Wendy Torell Executive summary Significant improvements in efficiency, power density, power monitoring,

More information

Our data centres have been awarded with ISO 27001:2005 standard for security management and ISO 9001:2008 standard for business quality management.

Our data centres have been awarded with ISO 27001:2005 standard for security management and ISO 9001:2008 standard for business quality management. AIMS is Malaysia and South East Asia s leading carrier neutral data centre operator and managed services provider. We provide international class data storage and ancillary services, augmented by an unrivaled

More information

Allocating Data Center Energy Costs and Carbon to IT Users

Allocating Data Center Energy Costs and Carbon to IT Users Allocating Data Center Energy Costs and Carbon to IT Users Paper 161 Revision 1 by Neil Rasmussen Executive summary Are complicated software and instrumentation needed to measure and allocate energy costs

More information

Specification of Modular Data Center Architecture

Specification of Modular Data Center Architecture Specification of Modular Data Center Architecture White Paper 160 Revision 1 by Neil Rasmussen > Executive summary There is a growing consensus that conventional legacy data center design will be superseded

More information

Best Practices for Wire-free Environmental Monitoring in the Data Center

Best Practices for Wire-free Environmental Monitoring in the Data Center White Paper 11800 Ridge Parkway Broomfiled, CO 80021 1-800-638-2638 http://www.42u.com sales@42u.com Best Practices for Wire-free Environmental Monitoring in the Data Center Introduction Monitoring for

More information

Element D Services Heating, Ventilating, and Air Conditioning

Element D Services Heating, Ventilating, and Air Conditioning PART 1 - GENERAL 1.01 OVERVIEW A. This section supplements Design Guideline Element D3041 on air handling distribution with specific criteria for projects involving design of a Data Center spaces B. Refer

More information

DATA CENTER RACK SYSTEMS: KEY CONSIDERATIONS IN TODAY S HIGH-DENSITY ENVIRONMENTS WHITEPAPER

DATA CENTER RACK SYSTEMS: KEY CONSIDERATIONS IN TODAY S HIGH-DENSITY ENVIRONMENTS WHITEPAPER DATA CENTER RACK SYSTEMS: KEY CONSIDERATIONS IN TODAY S HIGH-DENSITY ENVIRONMENTS WHITEPAPER EXECUTIVE SUMMARY Data center racks were once viewed as simple platforms in which to neatly stack equipment.

More information

Reducing Room-Level Bypass Airflow Creates Opportunities to Improve Cooling Capacity and Operating Costs

Reducing Room-Level Bypass Airflow Creates Opportunities to Improve Cooling Capacity and Operating Costs WHITE PAPER Reducing Room-Level Bypass Airflow Creates Opportunities to Improve Cooling Capacity and Operating Costs By Lars Strong, P.E., Upsite Technologies, Inc. 505.798.000 upsite.com Reducing Room-Level

More information

Use of the Signal Reference Grid in Data Centers

Use of the Signal Reference Grid in Data Centers Use of the Signal Reference Grid in Data Centers By Neil Rasmussen White Paper #87 Executive Summary Signal reference grids are automatically specified and installed in data centers despite the fact that

More information

DataCenter 2020: hot aisle and cold aisle containment efficiencies reveal no significant differences

DataCenter 2020: hot aisle and cold aisle containment efficiencies reveal no significant differences DataCenter 2020: hot aisle and cold aisle containment efficiencies reveal no significant differences November 2011 Powered by DataCenter 2020: hot aisle and cold aisle containment efficiencies reveal no

More information

An Improved Architecture for High-Efficiency, High-Density Data Centers

An Improved Architecture for High-Efficiency, High-Density Data Centers An Improved Architecture for High-Efficiency, High-Density Data Centers By Neil Rasmussen White Paper #126 Executive Summary Data center power and cooling infrastructure worldwide wastes more than 60,000,000

More information

Data Centers. Mapping Cisco Nexus, Catalyst, and MDS Logical Architectures into PANDUIT Physical Layer Infrastructure Solutions

Data Centers. Mapping Cisco Nexus, Catalyst, and MDS Logical Architectures into PANDUIT Physical Layer Infrastructure Solutions Data Centers Mapping Cisco Nexus, Catalyst, and MDS Logical Architectures into PANDUIT Physical Layer Infrastructure Solutions 1 Introduction The growing speed and footprint of data centers is challenging

More information

Our data centres have been awarded with ISO 27001:2005 standard for security management and ISO 9001:2008 standard for business quality management.

Our data centres have been awarded with ISO 27001:2005 standard for security management and ISO 9001:2008 standard for business quality management. AIMS is Malaysia and South East Asia s leading carrier neutral data centre operator and managed services provider. We provide international class data storage and ancillary services, augmented by an unrivaled

More information

Data Centers and Mission Critical Facilities Operations Procedures

Data Centers and Mission Critical Facilities Operations Procedures Planning & Facilities Data Centers and Mission Critical Facilities Operations Procedures Attachment A (Referenced in UW Information Technology Data Centers and Mission Critical Facilities Operations Policy)

More information

Supporting Cisco Switches In Hot Aisle/Cold Aisle Data Centers

Supporting Cisco Switches In Hot Aisle/Cold Aisle Data Centers CABINETS: ENCLOSED THERMAL MOUNTING MANAGEMENT SYSTEMS WHITE PAPER Supporting Cisco Switches In Hot Aisle/Cold Aisle Data Centers 800-834-4969 techsupport@chatsworth.com www.chatsworth.com All products

More information

Dynamic Power Variations in Data Centers and Network Rooms

Dynamic Power Variations in Data Centers and Network Rooms Dynamic Power Variations in Data Centers and Network Rooms White Paper 43 Revision 3 by James Spitaels > Executive summary The power requirement required by data centers and network rooms varies on a minute

More information

The Pros and Cons of Modular Systems

The Pros and Cons of Modular Systems The Pros and Cons of Modular Systems Kevin Brown, Vice President Global Data Center Offer Schneider Electric Schneider Electric 1 Foundational BUSINESS OVERVIEW Rev 2 Desired characteristics of pre-fabricated

More information

Thermal Optimisation in the Data Centre

Thermal Optimisation in the Data Centre Thermal Optimisation in the Data Centre Best Practices for achieving optimal cooling performance and significant energy savings in your data centre 1. Thermal analysis 2. Cold-Aisle-Containment 3. Seal

More information

Data Center Temperature Rise During a Cooling System Outage

Data Center Temperature Rise During a Cooling System Outage Data Center Temperature Rise During a Cooling System Outage White Paper 179 Revision 1 By Paul Lin Simon Zhang Jim VanGilder > Executive summary The data center architecture and its IT load significantly

More information

Cooling Audit for Identifying Potential Cooling Problems in Data Centers

Cooling Audit for Identifying Potential Cooling Problems in Data Centers Cooling Audit for Identifying Potential Cooling Problems in Data Centers By Kevin Dunlap White Paper #40 Revision 2 Executive Summary The compaction of information technology equipment and simultaneous

More information

The Different Types of UPS Systems

The Different Types of UPS Systems The Different Types of UPS Systems By Neil Rasmussen White Paper #1 Revision 5 Executive Summary There is much confusion in the marketplace about the different types of UPS systems and their characteristics.

More information

Implementing Energy Efficient Data Centers

Implementing Energy Efficient Data Centers Implementing Energy Efficient Data Centers By Neil Rasmussen White Paper #114 Executive Summary Electricity usage costs have become an increasing fraction of the total cost of ownership (TCO) for data

More information

Data Center Operating Cost Savings Realized by Air Flow Management and Increased Rack Inlet Temperatures

Data Center Operating Cost Savings Realized by Air Flow Management and Increased Rack Inlet Temperatures Data Center Operating Cost Savings Realized by Air Flow Management and Increased Rack Inlet Temperatures William Seeber Stephen Seeber Mid Atlantic Infrared Services, Inc. 5309 Mohican Road Bethesda, MD

More information

Data Center Airflow Management Retrofit

Data Center Airflow Management Retrofit FEDERAL ENERGY MANAGEMENT PROGRAM Data Center Airflow Management Retrofit Technology Case Study Bulletin: September 2010 Figure 1 Figure 2 Figure 1: Data center CFD model of return airflow short circuit

More information

Data Center Rack Systems Key to Business-Critical Continuity. A White Paper from the Experts in Business-Critical Continuity TM

Data Center Rack Systems Key to Business-Critical Continuity. A White Paper from the Experts in Business-Critical Continuity TM Data Center Rack Systems Key to Business-Critical Continuity A White Paper from the Experts in Business-Critical Continuity TM Executive Summary At one time, data center rack enclosures and related equipment

More information

Data Center Temperature Rise During a Cooling System Outage

Data Center Temperature Rise During a Cooling System Outage Data Center Temperature Rise During a Cooling System Outage White Paper 179 Revision 0 By Paul Lin Simon Zhang Jim VanGilder > Executive summary The data center architecture and its IT load significantly

More information

International Telecommunication Union SERIES L: CONSTRUCTION, INSTALLATION AND PROTECTION OF TELECOMMUNICATION CABLES IN PUBLIC NETWORKS

International Telecommunication Union SERIES L: CONSTRUCTION, INSTALLATION AND PROTECTION OF TELECOMMUNICATION CABLES IN PUBLIC NETWORKS International Telecommunication Union ITU-T TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU Technical Paper (13 December 2013) SERIES L: CONSTRUCTION, INSTALLATION AND PROTECTION OF TELECOMMUNICATION CABLES

More information

Data Center Energy Profiler Questions Checklist

Data Center Energy Profiler Questions Checklist Data Center Energy Profiler Questions Checklist Step 1 Case Name Date Center Company State/Region County Floor Area Data Center Space Floor Area Non Data Center Space Floor Area Data Center Support Space

More information

Fundamentals of CFD and Data Center Cooling Amir Radmehr, Ph.D. Innovative Research, Inc. radmehr@inres.com

Fundamentals of CFD and Data Center Cooling Amir Radmehr, Ph.D. Innovative Research, Inc. radmehr@inres.com Minneapolis Symposium September 30 th, 2015 Fundamentals of CFD and Data Center Cooling Amir Radmehr, Ph.D. Innovative Research, Inc. radmehr@inres.com Learning Objectives 1. Gain familiarity with Computational

More information

Hot-Aisle vs. Cold-Aisle Containment for Data Centers

Hot-Aisle vs. Cold-Aisle Containment for Data Centers Hot-Aisle vs. Cold-Aisle Containment for Data Centers White Paper 135 Revision 1 by John Niemann, Kevin Brown, and Victor Avelar > Executive summary Both hot-air and cold-air containment can improve the

More information

Cooling Strategies for IT Wiring Closets and Small Rooms

Cooling Strategies for IT Wiring Closets and Small Rooms Cooling Strategies for IT Wiring Closets and Small Rooms By Neil Rasmussen Brian Standley White Paper #68 Executive Summary Cooling for IT wiring closets is rarely planned and typically only implemented

More information

Environmental Data Center Management and Monitoring

Environmental Data Center Management and Monitoring 2013 Raritan Inc. Table of Contents Introduction Page 3 Sensor Design Considerations Page 3 Temperature and Humidity Sensors Page 4 Airflow Sensor Page 6 Differential Air Pressure Sensor Page 6 Water Sensor

More information

Virtual Data Centre Design A blueprint for success

Virtual Data Centre Design A blueprint for success Virtual Data Centre Design A blueprint for success IT has become the back bone of every business. Advances in computing have resulted in economies of scale, allowing large companies to integrate business

More information

OCTOBER 2010. Layer Zero, the Infrastructure Layer, and High-Performance Data Centers

OCTOBER 2010. Layer Zero, the Infrastructure Layer, and High-Performance Data Centers Layer Zero, the Infrastructure Layer, and The demands on data centers and networks today are growing dramatically, with no end in sight. Virtualization is increasing the load on servers. Connection speeds

More information

Center Thermal Management Can Live Together

Center Thermal Management Can Live Together Network Management age e and Data Center Thermal Management Can Live Together Ian Seaton Chatsworth Products, Inc. Benefits of Hot Air Isolation 1. Eliminates reliance on thermostat 2. Eliminates hot spots

More information