Data Center Cabinet Dynamics: Understanding Server Cabinet Thermal, Power and Cable Management

By Brian Mordick, RCDD, Senior Product Manager, Hoffman

Management strategies for: Thermal Management, Cable Management, Power Management

Summary

Today's IT professionals face many challenges in running an efficient data center, whether it is maintaining current installations or planning for future applications. They must protect the productivity of their company's network end-to-end and research the latest technologies as networking requirements evolve. To ensure the proper IT systems environment, it is essential to consider thermal, power and cable management in today's server cabinets.

IT professionals put significant emphasis on protecting communications equipment from potential outside threats. Meanwhile, increasing thermal densities, power shortages and fluctuations, and poor cable management may be compromising system operations or destroying the equipment from the inside. In a recent survey, data center managers indicated they are concerned about the following issues.

[Chart 1: Top Concerns of Data Center Managers. Reference: Data Center User's Group Conference, "The Adaptive Data Center: Managing Dynamic Technologies." Used with permission.]

Securing Your Network Against the Dangers of Overheating

IT professionals take all the necessary precautions to ensure that computer networks and communications equipment are secure and protected. Locks, firewalls, passwords and other protection protocols are in place, but an invisible enemy lurks within and could wreak havoc on the carefully configured and guarded systems. Advances in technology allow equipment to become faster and more compact, but there are consequences: increased thermal densities. Some industry executives predict that at the current growth rate, thermal heat densities could reach nuclear proportions within a decade if unchecked. Understanding how to temper those densities is becoming increasingly critical to ensure system reliability and availability.

[Chart: Heat load per product footprint (watts/ft² and watts/m²) versus year of first product announcement/shipment, 2000-2006, for communication equipment (frames), servers and disk storage systems (1.8-2.2 m tall), workstations (standalone) and tape storage systems. Source: The Uptime Institute, Inc., Version 1.2.]

As equipment heats up, performance slows and productivity drops. It can happen at any time and can be directly attributed to heat buildup in and around electronic equipment. Many companies don't realize that excessive heat shortens the life of electronic equipment and can even shut it down permanently. Heat may be invisible, but its effects are devastating and costly. According to the Uptime Institute, for every 18 degrees Fahrenheit (10 degrees Celsius) that internal cabinet temperatures rise above normal room temperature, the life expectancy of the enclosed electronics drops by 50 percent.

Blade Servers' Impact

Blade servers are the latest in high-density network equipment. They use a common chassis and provide slots for blades to be installed. These new levels of power density dramatically increase thermal loads. A single blade server chassis with all slots filled and running at capacity can produce more than 3 kilowatts of heat. Theoretically, a cabinet filled with blade servers (seven or eight chassis) can produce 21 to 24 kilowatts of heat. Although blade servers represent less than 10 percent of overall server sales, they are growing rapidly and are likely to become the industry norm within the next few years. This represents significant challenges to thermal and power management. "How am I going to get that much power to my servers, and how will I get rid of all the heat?" is a common sentiment expressed by most data center managers.
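The Uptime Institute rule quoted above lends itself to a quick estimate. The sketch below is a rough illustration only, assuming the 50-percent reduction compounds smoothly for each additional 18 degrees F of rise; the function name, the compounding assumption and the 72 degrees F "normal room temperature" default are ours, not the Institute's.

```python
def life_expectancy_factor(cabinet_temp_f: float, room_temp_f: float = 72.0) -> float:
    """Estimate the fraction of normal electronics life remaining.

    Rough sketch of the Uptime Institute rule cited above: life drops
    by 50% for every 18 degrees F (10 degrees C) that internal cabinet
    temperature rises above normal room temperature. Assumes the rule
    compounds smoothly; room_temp_f = 72 F is our assumption.
    """
    rise = max(0.0, cabinet_temp_f - room_temp_f)
    return 0.5 ** (rise / 18.0)

# Example: a cabinet running 36 F above room temperature retains
# roughly 25% of normal life expectancy (0.5 squared).
print(f"{life_expectancy_factor(108.0):.2f}")  # -> 0.25
```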

Current Practices May Not Be Working

When it comes to protecting data center servers, IT professionals should think inside the box and select data cabinets that are not only well built but also help manage heat buildup. Thermal management is a growing concern, because many existing data centers weren't built to handle the thermal densities of next-generation blade servers and networking equipment.

"For every 18 degrees Fahrenheit (10 degrees Celsius) that internal cabinet temperatures rise above normal room temperature, the life expectancy of the enclosed electronics drops by 50 percent." - Uptime Institute

Many organizations believe the answer is simple: cool the ambient air to lower the inside cabinet temperature. While this approach seems logical, it is problematic. Issues still present are:

- Continued hot spots and overheating.
- Massive increases in energy costs.
- Recirculation airflows are not addressed.
- Very cold supply air can cause condensation, leading to corrosion, equipment failure, poor or intermittent contacts, thermal expansion or contraction failures, etc.

[Figure 1: This computer-generated image illustrates heat buildup in the upper portions of a data cabinet.]

The best way to measure the amount of heat produced in a cabinet is to measure the power being consumed: every watt of power consumed nearly equals every watt of heat produced. The key to keeping equipment cool is channeling or ducting cool air into the equipment and providing a path for the heated air to escape out of the cabinet.

Power Consumption Considerations Are Significant

Power management is equally as important as thermal management. As power density requirements continue to climb, data center managers are increasingly asking, "How do I get the power to, and distributed within, the cabinet?" In addition, there is a direct relationship between power used and heat generated. Power, defined as voltage x current, is expressed in watts (W) or kilowatts (kW; 1,000 watts). "Watts cooling" is also the expression used when discussing cooling capacity. The connection is simple: power in = heat out.

The amount of power required bears a direct relationship to the amount of heat generated and the cooling capacity required: power in = voltage x current (amps). Example: 208 VAC x 30 A = 6,240 watts, or 6.24 kW. And power in = heat out.

In the design stage, before the cabinet is put into place and power can be measured, the amount of power required and the amount of heat generated can be estimated by taking a percentage of the Name Plate power that is stated on the equipment. Network equipment is required by UL and other agencies to list the equipment's power requirements. Since this rating reflects the maximum power that can be consumed by the power supply, only a percentage of it should be used; power supplies are typically designed to provide many times the power output that the network equipment actually needs. Using 50 to 75 percent of the Name Plate power provides a good estimate for calculating the amount of heat the cabinet will produce.

It should be noted that it takes more power to cool than to heat. While network equipment readily converts its power usage to heat (e.g., 5,000 watts of power in produces 5,000 watts of heat), cooling systems do not: 5,000 watts of cooling could require 10,000 watts or more of power.

What causes the rapid increase in power and thermal loads? When a cabinet is filled with blade servers, its average power consumption can increase from 1,500 watts to more than 20,000 watts (20 kilowatts). This increase in power, and the resulting increase in heat, impacts a data center's capacity to service customers. This level of power demand also changes the way power is distributed inside the cabinet: where a basic 15 A power strip with multiple outlets once sufficed, a three-phase 208 VAC PDU (Power Distribution Unit) capable of delivering more than 16.6 kilowatts is now needed to handle the greater power demands.

The solution seems simple: ensure that the data center can provide 20 kilowatts or more of redundant power and cooling to every enclosure. While that may seem easy, it is not always economical, practical or even technically possible because of the upfront infrastructure capital cost and the ongoing operational costs for the life of the data center. The capital cost to provide this level of thermal and power service is typically beyond the reach of many companies; even though they depend on their data centers and the services they provide, companies are forced to make compromises due to budget realities.

A Brief Look at Thermal Basics

Network equipment requires a continuous stream of cool air, moved via convection, in order to run. There are only two components a data center manager can manipulate to dissipate the heat generated inside the cabinet: the amount of air and the data center temperature. The very best designed data center can typically provide air temperatures around 55 degrees Fahrenheit, and thus a ΔT (in °F) of about 45 degrees Fahrenheit.
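The estimating rule above is easy to script. The following is a minimal sketch of the paper's guidance: compute power as voltage x current, then plan on 50 to 75 percent of the nameplate rating as the expected heat load. Function names, and the 8,000 W cabinet in the usage example, are ours.

```python
def power_watts(voltage_vac: float, current_amps: float) -> float:
    """Power in = voltage x current; and power in = heat out."""
    return voltage_vac * current_amps

def estimated_heat_watts(nameplate_watts: float, fraction: float = 0.75) -> float:
    """Estimate cabinet heat load as 50-75% of total Name Plate ratings.

    Nameplate ratings reflect the maximum the power supply can draw,
    so the text suggests planning with 50-75% of that figure; the
    default fraction here is our choice within that range.
    """
    return nameplate_watts * fraction

# Worked example from the text: 208 VAC x 30 A = 6,240 W (6.24 kW).
print(power_watts(208, 30))              # -> 6240
# Hypothetical cabinet totaling 8,000 W of nameplate ratings:
print(estimated_heat_watts(8000, 0.50))  # -> 4000.0 (lower-bound estimate)
```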

As cooling strategies become more complex, the resulting increase in the number of components, and their potential for failure, can result in rapid temperature rise in the cabinet in as little as 5 to 10 minutes. Choosing the best thermal and power management solution is essential to help facilitate optimal component speed and processing power in your data center without sacrificing reliability and performance.

Cabinet Design's Role in Heat Dissipation

Cabinets can be designed with features that facilitate heat dissipation, and can be placed in a data center to define specific thermal zones for air intake and exhaust to create maximum cooling efficiencies. Hoffman has tested several cabinet configurations to determine how cabinet design and data center placement can maximize heat dissipation, and has established best practices for keeping electronic equipment cool and reliable.

Passive Cooling versus Active Cooling

Passive cooling uses louvers, vents and perforated panels, along with the equipment's fans, to exchange ambient air. Active cooling uses cabinet venting fans to exhaust hot air and can be used in conjunction with piped-in chilled air.

Critical Formulas for Thermal Management

Watts (power) = voltage x current (amperes) = Watts (heat load)

Watts (thermal convection cooling) = 0.316 x CFM x ΔT (in °F)
or CFM = Watts (cooling) / (0.316 x ΔT (in °F))
or ΔT (in °F) = Watts (cooling) / (0.316 x CFM)

This equation can be solved for any of the three variables (Watts cooling, CFM or ΔT) and is invaluable in the design and operation of a data center.

CFM = cubic feet per minute (the quantity of air and its velocity)
ΔT (in °F) = delta T, the difference between the coolest air (55°F) and the maximum allowable temperature (95°F)

Example: 10 kW of heat load in a typical data center with a 30°F ΔT will need 1,055 CFM.

BTUs (British thermal units) = Watts cooling x 3.413
Example: 10 kW of cooling = 34,130 BTUs
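These relationships are simple enough to fold into a planning script. The sketch below implements the sidebar's equations directly (0.316 x CFM x ΔT, and 3.413 BTU per watt) and reproduces its worked examples; only the function and constant names are ours.

```python
# Sketch of the sidebar formulas; names are ours, constants are the paper's.
AIR_CONSTANT = 0.316   # watts of convection cooling per (CFM x degree F)
BTU_PER_WATT = 3.413   # BTUs per watt of cooling

def cooling_watts(cfm: float, delta_t_f: float) -> float:
    """Watts (thermal convection cooling) = 0.316 x CFM x delta-T (F)."""
    return AIR_CONSTANT * cfm * delta_t_f

def required_cfm(watts: float, delta_t_f: float) -> float:
    """CFM = Watts (cooling) / (0.316 x delta-T (F))."""
    return watts / (AIR_CONSTANT * delta_t_f)

def delta_t_f(watts: float, cfm: float) -> float:
    """Delta-T (F) = Watts (cooling) / (0.316 x CFM)."""
    return watts / (AIR_CONSTANT * cfm)

def btus(watts_cooling: float) -> float:
    """BTUs = Watts cooling x 3.413."""
    return watts_cooling * BTU_PER_WATT

# Worked examples from the sidebar:
print(round(required_cfm(10_000, 30)))  # 10 kW at a 30 F delta-T -> 1055 CFM
print(btus(10_000))                     # 10 kW of cooling -> 34130.0 BTUs
```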

Hot Aisle/Cold Aisle Data Center Layout

A hot aisle/cold aisle data center layout has specific hot and cold areas. Computer room air conditioners (CRACs) are placed strategically to create cold aisles. The cabinets on both sides of those aisles have network equipment installed that draws the cold air through the cabinet fronts and into its intakes. The equipment exhaust exits through the cabinet rear, creating hot aisles that alternate with the cold aisles. The hot air is then recirculated to the CRAC unit. This airflow management strategy addresses adverse equipment airflow, preventing equipment exhaust from being drawn into other equipment intakes. This type of data center layout has been universally accepted and is being actively deployed in most data centers. Three types of hot aisle/cold aisle cabinet designs are:

Hot Aisle/Cold Aisle Configuration, Passive Cooling
When hot aisle/cold aisle cabinet positioning is implemented and heat buildup is 1,500 to 2,000 watts, passive cooling can be utilized. In this configuration, cold air is pulled from the floor to cool equipment as it moves from the front to the back of the cabinet. The resulting warm air is then exhausted out the cabinet top and back.

Hot Aisle/Cold Aisle Configuration, Active Cooling
Hot aisle/cold aisle cabinet configurations in conjunction with active cooling are the most efficient cooling solutions for components with heat dissipation levels ranging from 4,000 to 6,000 watts. Cabinets that have a perforated front and a rear fan door are the most efficient for this type of application.

Hot Aisle/Cold Aisle Configuration, Active Cooling with Floor Ducting
Hot aisle/cold aisle cabinet configurations in conjunction with active cooling plus floor ducting will help manage heat buildup when heat dissipation levels reach 6,000 to 10,000 watts. The most effective cabinets for these applications have a front window door, a rear fan door and a floor-ducted base with a plenum front.

Random Data Center Layout

The random data center layout is typically associated with older or legacy data centers, where the entire room is cooled with no specific hot or cold area strategies. In many cases, data center managers do not have the capital to upgrade these data centers to more efficient designs, but they still need to increase the cabinets' thermal density. Two types of legacy configurations are:

Random Configuration, Passive Cooling
When a data center has random cabinet positioning and a relatively low heat dissipation volume of 1,000 to 2,000 watts, passive cooling will manage heat buildup. Cabinets that have a perforated front, rear and top perform the most efficiently in this type of application.

Random Configuration, Active Cooling
As heat loads increase to a range of 2,000 to 3,000 watts in random-positioning data centers, active cooling can be employed. The cabinets used in this type of application have a perforated front, a louvered lower-one-third rear door and a top fan. Legacy data centers typically use this type of configuration to increase thermal densities without incurring costly facility reconstruction.

Layout Summary

Air cooling continues to be the most economical means of dissipating heat. All commercially available servers continue to use airflow to dissipate heat out of the equipment (cold intake air enters from the front while hot air exhausts out the back). Careful consideration should be taken to determine the best cabinet configuration for your data center.
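The heat-load ranges quoted for the five configurations above suggest a simple selection rule. The sketch below is our summary of those ranges, not a Hoffman sizing tool; the exact cutoffs between adjacent ranges, and the fallback advice, are our assumptions.

```python
def suggest_cooling(layout: str, heat_watts: float) -> str:
    """Map a cabinet's heat load to the configurations described above.

    A rough lookup of the ranges quoted in the text; loads beyond the
    listed ranges point to the strategies in the next section.
    """
    if layout == "hot aisle/cold aisle":
        if heat_watts <= 2000:
            return "passive cooling: perforated front, exhaust out top and back"
        if heat_watts <= 6000:
            return "active cooling: perforated front, rear fan door"
        if heat_watts <= 10000:
            return "active cooling with floor ducting: window front, rear fan door, plenum base"
    elif layout == "random":
        if heat_watts <= 2000:
            return "passive cooling: perforated front, rear and top"
        if heat_watts <= 3000:
            return "active cooling: perforated front, louvered lower rear, top fan"
    return "beyond these ranges: consider load spreading, liquid cooling or a high-density zone"

print(suggest_cooling("hot aisle/cold aisle", 5000))
# -> active cooling: perforated front, rear fan door
```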

Data Center Design Considerations

When determining the placement of high-density cabinets in a data center, there are several practical and effective strategies.

Utilization of Load Spreading
The most popular solution for incorporating high-density equipment into many of today's data centers is load spreading. When the power required and heat generated by the equipment inside a cabinet exceed the cabinet's cooling capacity, installing the equipment in multiple cabinets, or "spreading the load," distributes the power and cooling demands more evenly between cabinets. Many 1U servers and blade servers do not need to be installed in the same cabinet and can be spread out across multiple cabinets. Load spreading can be a good option, because it may be less costly to enlarge or expand a data center than to add complex supplemental cooling systems. A careful analysis of real estate, power, technical labor force, connectivity and other costs needs to be conducted in order to make proper decisions.

It should be noted that spreading equipment among multiple cabinets can result in a sizable amount of unused vertical space within each cabinet. The unused space must be filled with blanking panels to prevent hot air recirculation, which reduces cooling performance. Load spreading can also cause data cabling issues; proper cable management techniques are discussed later in this paper.

The Borrowed Cooling Option
When borrowed cooling is utilized, cabinets containing low-heat-producing equipment are strategically placed throughout the data center next to cabinets containing high-heat-generating equipment. This enables the higher heat load cabinets to use, or borrow, the adjacent cabinets' unused cooling capacity. This cooling option can reliably and predictably enable cabinets to be cooled to more than twice their average design value. Cabinet heat capacity rules can be established, with compliance verified through power consumption monitoring. However, many IT professionals find that this cooling method requires them to enforce complex rules, occupies more floor space and limits them to about twice the design power density.

Implications of Liquid Cooling
Another solution for removing excessive heat loads from data center cabinets is liquid cooling. Liquid cooling solutions are either water or refrigerant based. Many IT professionals are hesitant to use water in data centers because of the risk of leakage. Also, moving cooling pipes, tubes or hoses requires time and money, making moves, adds and changes (MACs) a challenge. Liquid cooling systems operate much like a heat exchanger, but supply chilled liquid, instead of cold air, to the system. The cabinet heat transfers to the liquid, which is then piped out to be reconditioned (chilled back down). The systems must be leakproof, reliable, expandable and flexible enough to allow easy reconfiguration of the data center space.

The following should be considered before installing a liquid cooling solution:

- Liquid supply lines and warm-water return lines need to be installed.
- Pipe runs must not interfere with already-installed connectivity or power cables.
- Future flexibility can be limited.
- Every threaded or welded fitting presents a potential leak; pipe runs need to be reviewed for condensation.
- Additional electrical circuits are required.
- Multiple independent systems will be needed to provide the redundancy or backup required in most data centers.
- Future MACs can be more costly.

In applications of extreme heat, when spreading the load and increasing the size of the data center aren't possible, liquid cooling solutions can be an alternative. However, the facility design considerations must be fully understood.

Challenges of a Dedicated High-Density Area
When power density exceeds 10 kilowatts per cabinet, unpredictable airflow is a problem. To remedy this, the airflow path between the cooling system and the cabinet must be shortened. Creating a special high-density row or zone in a section of the data center, cooled with the center's CRAC units, is one solution. This approach is likely temporary, though, due to data center growth and change.

Thermal Management Best Practices

- Avoid restricted, cascading and short-circuited airflows.
- Install blanking panels in all unused rack spaces.
- Neatly route cables to prevent air restrictions.
- Take a holistic approach to the data center (raised floor, CRAC units, cabinets, etc.).
- Avoid the use of cable support arms and slide-outs that may restrict airflows.
- Spread the load to the available spaces (cabinets).
- Strategically locate low- and high-heat-loaded cabinets within the data center.
- Create special high-heat zones within the data center.
- Consider the addition of a supplemental (liquid) cooling system.
- Increase the size of the data center (new addition or building).
- Adopt a hot aisle/cold aisle cabinet layout.
- Avoid large temperature swings (thermal expansion and condensation issues).
- Avoid temperatures below the dew point (condensation).
- Strategically place CRAC units to provide airflow to the aisles.
- Position perforated tiles to uniformly provide cold air to the equipment aisles.

Cabinet density must also be predictable or known in order to determine power and cooling requirements.

Design Wrap-up
It is important to remember that a cabinet, no matter what the design, cannot make up for insufficient total cooling within the data center. A cabinet using fans, deflectors, blocking plates or other similar devices can never cool itself below the surrounding ambient air temperature; however, it can improve the efficiency of heat movement in the data center by controlling intake and exhaust airflows. Increased heat dissipation requires greater complexity and integration of the entire data center (raised floor, CRAC units, cabinets, etc.).

Importance of Proper Cable Management

Deploying thermal and power management solutions should not be viewed as the only way to maintain an efficient data center. Checking for cable performance is as important as tending to overheated equipment or increased power loads. To maintain the quality of vital information exchanged in today's data rooms, IT professionals must properly manage cables and cords. As unsettling as it may be for IT professionals to see a cluttered mass of cable spaghetti, effective cable management is not just about appearances. Improper cable management can lead to serious consequences:

- Nicks, stretching and twisting can affect a cable's signal quality and the network speed.
- Cables in the rear of a cabinet can block airflow and increase the temperature inside the cabinet.
- Sharp changes in direction can change the electrical properties of the cable by changing the cable size and twist rate.
- Cable issues can increase the time required to trace a cable during a MAC in the cabinet or rack.

Employ Cable Management Best Practices

As the number of IT components inside a cabinet continues to increase, so does the number of power and data cables. The care and attention given to cables during installation and ongoing changes are the main factors in maintaining high-quality network performance. Consider the following checklist to ensure proper cable management:

- Run cables overhead or below whenever possible to provide easy access.
- Install proper cable management supports. (Most manufacturers have several cable management offerings.)
- Consolidate cable bundles with Velcro straps, using low to moderate pressure. This can prevent the cable damage associated with traditional metal rings.
- Keep copper and fiber-optic cables on separate runs so the weight of the copper does not impact the fiber.
- Avoid kinks and sharp bends in cables by using waterfall and cable spool devices. Spools can be especially effective with fiber for maintaining proper bend requirements and controlling slack.
- Make sure that where cables run through metal openings there are protective grommets and edging.
- Separate power, data (copper) and data (fiber) cables from each other.

Cable Fill Rates by Cross-Sectional Area

Number of cables per channel, by channel cross-sectional area (sq. in.), fill rate and cable type. Cable diameters: CAT 5e = 0.22 in., CAT 6 = 0.28 in., CAT 6a = 0.35 in.

                                     Area       40% fill         60% fill         80% fill
Channel                              (sq.in.)   5e   6    6a     5e   6    6a     5e   6    6a
PROLINE PVCM 50mm                    6.220      65   40   26     98   61   39     131  81   52
PROLINE PVCM 100mm                   12.920     136  84   54     204  126  81     272  168  107
PROLINE PVCM X50mm                   10.870     114  71   45     172  106  68     229  141  90
PROLINE PVCM X100mm                  22.960     242  149  95     363  224  143    483  298  191
PROLINE PVCMTD 3.00 x 4.00           12.000     126  78   50     189  117  75     253  156  100
PROLINE PRBTD* 50mm (1.91 x 4.00)    7.640      80   50   32     121  74   48     161  99   64
PROLINE PRBTD* 100mm (3.88 x 4.00)   15.520     163  101  65     245  151  97     327  202  129
PROLINE PRBF 50mm (1.60 x 5.25)      8.400      88   55   35     133  82   52     177  109  70
PROLINE PRBF 100mm (3.57 x 5.25)     18.700     197  121  78     295  182  117    394  243  156
Tie Wrap 8                           2.400      N/A  N/A  N/A    N/A  N/A  N/A    51   31   20
Tie Wrap 12                          6.000      N/A  N/A  N/A    N/A  N/A  N/A    126  78   50
D-Ring Large                         9.440      99   61   39     149  92   59     199  123  79
D-Ring Small                         3.500      37   23   15     55   34   22     74   45   29
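The capacities in the table follow from the channel's cross-sectional area, the fill rate and the cable's (roughly circular) cross-section. The sketch below is our reconstruction of that arithmetic; it reproduces representative table entries, but the function name and the circular-cable model are our assumptions, not Hoffman's published method.

```python
import math

def cable_capacity(channel_area_sq_in: float, fill_rate: float,
                   cable_dia_in: float) -> int:
    """Cables per channel = (channel area x fill rate) / cable cross-section.

    The cable cross-section is modeled as a circle, pi/4 x diameter^2;
    rounding to the nearest whole cable reproduces the table entries.
    """
    cable_area = math.pi / 4 * cable_dia_in ** 2
    return round(channel_area_sq_in * fill_rate / cable_area)

# PROLINE PVCM 50mm channel (6.220 sq in) with 0.22 in CAT 5e cable:
print(cable_capacity(6.220, 0.40, 0.22))  # -> 65, matching the table
print(cable_capacity(6.220, 0.60, 0.22))  # -> 98
print(cable_capacity(6.220, 0.80, 0.22))  # -> 131
```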

Finding the Best Thermal, Power and Cable Solution for Your Data Center

As new technologies arise and the demand for more performance from computer equipment in data centers increases, IT professionals must constantly research best practices for managing power consumption, high levels of heat and an abundance of cables. A full range of cabinet features and designs can be combined with your facility's data center layout to effectively manage the heat generated by network equipment, its power consumption and its cabling. Thinking inside the box and finding solutions for these areas can help facilitate optimal component speed and processing power without sacrificing reliability and performance. For more information on thermal, power and cable management, visit www.hoffmanonline.com.

About the Author

Brian L. Mordick, RCDD, Senior Product Manager, Hoffman

Brian Mordick is a Senior Product Manager at Hoffman, with special expertise in datacom, thermal and seismic issues. While developing various types of enclosures over the last 17 years, he has incorporated innovation into new enclosure designs and holds several patents. His engineering background and knowledge of the Information Technology industry made him an integral part of the development of the Data and Communication product platforms at Hoffman. Mordick is a graduate of the University of Wisconsin-Stout, a member of BICSI, and a Registered Communications Distribution Designer (RCDD). He has frequently contributed to articles on enclosure trends and electronics and is active in the industry as a public speaker. Recent presentations include: Thermal Management, BICSI, July 2006; EMC, BICSI, May 2004; Seismic Compatibility of Network Racks & Cabinets, BICSI, May 2002; Thermal Management of Network Equipment, BICSI, Jan. 2002; Data Communications Racks and Cabinets, BICSI, Sept. 2001.


Hoffman, 2100 Hoffman Way, Anoka, Minnesota 55303-1745 U.S.A. Phone: 763-421-2240; Fax: 763-422-2178; Customer Service: 763-422-2661; http://www.ehoffman.com

Canada: Hoffman, 111 Grangeway Avenue, Suite 504, Scarborough, Ontario M1H 3E9. Phone: 416-289-2770; Fax: 416-289-2883; 1-800-668-2500 (Canada only)

Mexico: Pentair Enclosures, S. de R.L. de C.V., Federico T. de la Chica No. 8, Piso 4A, Cd. Satelite, Naucalpan, Mexico C.P. 53100. Tel: (55) 5393-9005 ext. 222; Fax: (55) 5393-8827

For additional international locations, see www.hoffmanonline.com/international

WP-00001 Rev. A 09/06