Supporting Cisco Switches In Hot Aisle/Cold Aisle Data Centers




CABINETS: ENCLOSED | THERMAL MANAGEMENT | MOUNTING SYSTEMS

WHITE PAPER

800-834-4969 | techsupport@chatsworth.com | www.chatsworth.com

All products quoted are subject to availability based on manufacturing capacity, and shipping dates should be considered estimates only. While every effort has been made to ensure the accuracy of all information, CPI does not accept liability for any errors or omissions and reserves the right to change information and descriptions of listed services and products. © 2007 Chatsworth Products, Inc. All rights reserved. CPI, Saf-T-Grip, Seismic Frame, SlimFrame and MegaFrame are federally registered trademarks of Chatsworth Products, Inc. CPI Passive Cooling, Cube-iT Plus, EasySwing, FastTrac, Multi-Mount, Structured Termination Systems, TeraFrame, Thinline, QuadraRack and Universal Rack are trademarks of Chatsworth Products, Inc. All other trademarks belong to their respective companies. MKT-60020-367 NB Rev2 05/07

Cisco Switches in Data Center Cabinets

The installation of Cisco switches inside equipment cabinets in the data center is a growing trend. Critical to supporting this trend is the need to address the right-to-left airflow pattern of Cisco 6500 and 9500 series switches. Specific steps must be taken to prevent internal re-circulation and to keep the hot exhaust from one switch from raising the intake temperature of an adjacent switch. It is also highly desirable to place this equipment in a hot aisle/cold aisle layout without disrupting the data center's planned airflow pattern. These and other issues are addressed below.

Growing Thermal Management Concerns for Data Centers

Data centers have been experiencing growing problems with heat for several years. The quest for greater speeds and higher capacities has driven each successive generation of equipment to consume more power and generate more heat. The ASHRAE power trend chart below illustrates the increases in heat being generated by equipment. Note that the heat loads associated with network switching equipment are actually higher than the heat loads for servers.

Source: Datacom Equipment Power Trends and Cooling Applications, 2005, American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc.

Network equipment is generating more heat as a result of Power over Ethernet (PoE) and 10 Gigabit Ethernet data speeds. These implementations require more power to flow through the equipment and result in more heat being generated. Data centers that implement PoE and 10 Gigabit Ethernet will experience higher heat loads.

Impact of Poor Cabinet Layout on Heat in Data Centers

Heat in equipment is generally removed using air. Internal fans move air through the equipment chassis and exhaust hot air into the surrounding environment. Since most data centers use cabinets to house their equipment, this means that the hot air is exhausted out of the equipment and into the cabinet. What happens next?

To effectively manage heat in the data center we must ensure that the hot air flowing out of equipment cabinets is properly managed so that it does not immediately flow into the front of another cabinet. The layout of the data center is critical to managing this airflow. Unfortunately, many older data centers still have cabinets arranged in legacy layouts that do not address this critical airflow requirement. Cabinets may be found facing almost any direction, with only short rows of cabinets aligned with any consistency. The greatest threat from these legacy layouts is that hot exhaust air from one cabinet will flow directly into the front of another cabinet and cause equipment in that second cabinet to fail.

Figure 1: Legacy cabinet layout

In many legacy layouts hot exhaust air will flow out the rear of one cabinet and into the front of the next cabinet. This is a serious issue even when deploying only moderate heat loads in cabinets and should always be avoided. Similarly, uncontrolled airflow exiting network switches will readily flow into adjacent equipment if allowed. Whether the switches are placed in open racks or enclosed cabinets, their exhaust airflow must be managed to prevent adjacent switches from ingesting the heated exhaust air and experiencing thermal problems.

Addressing Airflow Problems in Data Centers

The widely adopted best practice for data center cabinet layout is called hot aisle/cold aisle. Hot aisle/cold aisle establishes dedicated aisles for delivering cold air in front of equipment cabinets and dedicated hot aisles behind cabinets. Arranging the cabinets so that front doors always face front doors across the cold aisle and rear doors always face rear doors across the hot aisle produces a layout that helps segregate the hot exhaust air and minimize the risk that hot air will be ingested through the front of the cabinet and into the equipment.

Figure 2: Hot aisle/cold aisle cabinet layout
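A hot aisle/cold aisle layout only helps if each cold aisle also delivers enough air volume for the equipment it feeds. As a rough illustration of how that volume can be estimated, the sketch below applies the widely used sea-level approximation CFM ≈ 3.16 × watts / ΔT(°F); the 5 kW cabinet load and 25 °F temperature rise are assumed example values, not figures from this paper.

    # Illustrative sketch: estimating the cold-air volume a cabinet heat load requires.
    # Uses the common sea-level rule of thumb CFM ~= 3.16 * watts / delta_T_F;
    # the 5 kW load and 25 F rise are assumed example values, not CPI or Cisco data.

    def required_cfm(heat_load_watts: float, delta_t_f: float) -> float:
        """Approximate airflow (CFM) needed to carry away a sensible heat load."""
        return 3.16 * heat_load_watts / delta_t_f

    cabinet_load_w = 5000.0   # assumed cabinet heat load: 5 kW
    temp_rise_f = 25.0        # assumed air temperature rise through the equipment
    print(f"~{required_cfm(cabinet_load_w, temp_rise_f):.0f} CFM of cold air required")
    # Prints roughly 632 CFM; if the cold aisle delivers less, the equipment makes
    # up the difference by drawing in recirculated hot air.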

A hot aisle/cold aisle layout begins to address some of the thermal issues present in the legacy data center. The hot aisle/cold aisle layout recognizes and leverages some of the known airflow patterns associated with equipment cabinets. In today's data centers, cabinets are expected to promote the hot aisle/cold aisle airflow paradigm. Cabinets are viewed as part of the solution, and as data center heat densities increase, cabinets will be expected to evolve and improve in order to support these increasing demands.

Server Equipment Airflow

Servers and many other pieces of equipment in the data center route air in through the front of the equipment and out the rear. This complements the front-to-rear airflow pattern desired for the cabinet when it is situated in a hot aisle/cold aisle data center.

Figure 3: Typical server airflow

When equipment such as servers is placed in cabinets, it is important to eliminate re-circulation of hot exhaust air internally within the cabinet. Only two simple steps are required. First, filler panels (also called blanking panels) must be installed in every empty rack space where equipment is not installed. This prevents hot air in the rear of the cabinet from traveling forward between installed equipment and being drawn into the server inlets. Second, the perimeter of the equipment mounting space must be closed off so that hot air in the rear cannot travel around the sides, over the top or under the bottom of the installed equipment and return to the front of the cabinet. The cabinet accessory that closes off this re-circulation path is called an air dam. Once these air dams and filler panels are installed, internal re-circulation of hot exhaust air has been eliminated and the focus should be on balancing the volume of cold air delivered in the cold aisle with the volume of air required by the equipment.

Network Equipment Airflow

Network switches, specifically Cisco 6500 series and 9500 series switches, utilize an airflow pattern that draws cold air into the right side of the chassis and exhausts hot air out of the left side. This side-to-side airflow pattern can create problems, especially when the switches are installed in cabinets.

Figure 4: Cisco switch side-to-side airflow

When Cisco switches are installed in adjacent cabinets or open racks, the hot exhaust air from one switch will readily flow directly into the air intake of the adjacent switch. A barrier must be placed between adjacent switches to prevent this cascading heat effect. Left unmanaged, this effect will elevate the operating temperatures of the downstream switches and, with three or more switches placed side by side, will usually result in thermal shutdown of the third switch.

Since Cisco network switches draw their cooling air from the right side of the chassis and exhaust hot air out the left side, preventing re-circulation of hot air inside the cabinet can be more of a challenge than with servers. Filler panels still play an important, if slightly different, role, but air dams in the sides of the cabinet are not an option.

As discussed earlier, server airflow patterns facilitate front-to-rear airflow through the cabinet. It is also significant that servers have relatively few power and data connections, and these are typically located at the rear of the equipment. Network equipment airflow, specifically the side-to-side airflow pattern of Cisco 6500 and 9500 series switches, does not typically work well in cabinets. Furthermore, switches represent a high-density cabling application and require very high capacities of cable ingress and cable management. When we look at the needs of network switch equipment, we find that typical data center cabinets fall short. Cabinets have been greatly improved over the past ten years and have grown to meet the increasing needs of servers in the data center; however, the networking side of the data center has not been well addressed by cabinets.

The specific issues associated with Cisco switches and networking applications include managing the side-to-side airflow pattern in a hot aisle/cold aisle environment, eliminating hot exhaust air re-circulation inside the cabinet, preventing hot exhaust air from directly entering adjacent equipment, managing cabling so that cool air can be effectively delivered to the switch intake, and providing adequate cable routing inside the cabinet as well as cable ingress to the cabinet. A cabinet that is intended to support Cisco switches and other networking applications, such as copper and fiber patching, needs to effectively manage the side-to-side airflow pattern of this equipment in a hot aisle/cold aisle data center and address the high density of cabling that is always present in network applications.
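The cascading heat effect described above can be pictured with a simple back-of-the-envelope model: each switch warms the air passing through it, and the next switch in the row ingests that exhaust directly. The temperature rise per chassis and the inlet alarm threshold used below are assumed illustrative values, not Cisco specifications.

    # Illustrative model of the cascading heat effect for side-by-side switches.
    # The 15 C rise per chassis and the 45 C inlet alarm threshold are assumed
    # values chosen only to show the trend, not Cisco or CPI specifications.

    room_supply_c = 22.0      # assumed cold-aisle supply temperature
    rise_per_switch_c = 15.0  # assumed air temperature rise through each chassis
    inlet_limit_c = 45.0      # assumed inlet temperature that triggers thermal alarms

    intake_c = room_supply_c
    for position in range(1, 5):
        status = "thermal alarm likely" if intake_c > inlet_limit_c else "ok"
        print(f"Switch {position}: intake {intake_c:.0f} C ({status})")
        intake_c += rise_per_switch_c  # the next switch ingests this switch's exhaust

With these assumed numbers the third switch already sees an intake above the alarm threshold, which is why a barrier between adjacent switches, or a duct that removes the exhaust air entirely, is required.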

As the heat loads associated with computing and network equipment continue to climb, cabinet manufacturers are being asked to deliver solutions that can support this equipment. It is important to consider both the current and future requirements of the data center when planning a cabinet installation so that capacity will be available when new equipment is deployed. Not only do the heat loads need to be addressed, but the architectural differences between servers and network switches require that we consider how well the cabinet can serve these divergent requirements.

Designing Space Inside the Network Cabinet

In order to deliver the required space for network cables and side-to-side airflow, a wide cabinet footprint is required. It is recommended that a minimum width of 800 mm (32 in.) be provided in a network cabinet so that adequate room exists on either side of the Cisco switch for airflow. However, not just any wide cabinet architecture will deliver an effective solution. A recessed frame design delivers both the space required for side-to-side airflow and superior cable access for network installations. Subsequent equipment moves, adds and changes also benefit from this architecture.

Figure 5: Offset bracket creating recessed frame design

When the doors and side panels are held away from the cabinet frame by offset brackets, large volumes of space are created which allow air and cables to be routed outside of the frame. The space in the sides of a cabinet with a recessed frame makes it possible to move air to the right-side intake and away from the left-side exhaust of Cisco 6500 and 9500 series switches. Another significant benefit of this recessed frame design is that cable bundles are directly accessible from the front or rear, and exceptional cable management space is provided between cabinets that are bayed together. All of this space makes the routing of cables much easier to manage than in typical cabinets.

Redirecting Side-to-Side Airflow into the Hot Aisle

Cabinets in the data center are typically a critical part of the airflow management design and have a significant impact on the thermal performance of the data center. In the hot aisle/cold aisle layout, cabinets should promote the front-to-rear movement of air through the cabinets while the row of cabinets creates the wall that helps segregate the hot exhaust air in the hot aisle from the cold air in the cold aisle.
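Referring back to the 800 mm minimum width recommended above, a quick arithmetic check shows roughly how much side space a cabinet of that width leaves beside a 19 in. EIA mounting envelope. The nominal rail envelope used below is an assumption for illustration, and frame members and panel thicknesses are ignored, so the result is approximate.

    # Illustrative arithmetic: side space available for cabling and side-to-side
    # airflow beside the equipment rails. The ~483 mm (19 in.) EIA mounting
    # envelope is a nominal assumption; frame and panel thicknesses are ignored.

    eia_envelope_mm = 483.0  # nominal 19 in. mounting envelope

    for cabinet_width_mm in (600.0, 800.0):
        side_space_mm = (cabinet_width_mm - eia_envelope_mm) / 2
        print(f"{cabinet_width_mm:.0f} mm cabinet: ~{side_space_mm:.0f} mm per side")
    # A 600 mm cabinet leaves roughly 58 mm per side; an 800 mm cabinet leaves
    # roughly 158 mm per side for cable bundles and the switch's side airflow.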

In order for Cisco switches to perform well in cabinets, the exhaust airflow must be directly managed. The hot exhaust air must be captured and conveyed out the rear of the cabinet. If the exhaust air is completely contained and directed out the rear, then the potential for re-circulation of hot air inside the cabinet is eliminated and hot exhaust air is prevented from directly entering adjacent equipment.

Figure 6: Redirecting exhaust out the rear of the cabinet

The diagram and Computational Fluid Dynamics (CFD) image above show how a Network Switch Exhaust Duct captures and directs all of the hot exhaust air out the rear of the cabinet and into the hot aisle. This effectively addresses the exhaust-side issues related to placing Cisco 6500 and 9500 series switches in cabinets.

Managing the Intake Air Stream

While removing the hot exhaust air that exits from the left side of the switch is critical, it is equally important to manage the flow of cold air entering the intake on the right side. It is not enough to provide the required volume of cold air. It is also important to control the pathways that the intake air follows as it approaches the right side of the switch.

Filler panels must be installed in the open rack-mount spaces above and below each Cisco switch chassis. It is recommended, as with server cabinets, that all open rack spaces be filled. Additionally, the open rack spaces at the rear of the cabinet should be blocked using filler panels. Cables must be carefully managed in the right front corner of the cabinet. Given the placement of the fan module in the Cisco chassis, it is common practice to route all of the cables on the front of the switch to the right.

Figure 7: Cables on the front of a Cisco switch
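As a small planning aid for the filler-panel guidance above, the sketch below flags the open rack-mount units that still need blanking once equipment positions are known. The 45U cabinet height and the equipment placements are hypothetical examples, not a recommended layout.

    # Hypothetical planning sketch: list open rack units that still need filler
    # panels. The 45U height and the equipment placements are example values only.

    cabinet_units = 45
    installed = {                 # name -> (starting U, height in U), hypothetical
        "Cisco switch A": (3, 13),
        "Cisco switch B": (20, 13),
        "patch panels":   (40, 4),
    }

    occupied = set()
    for start_u, height_u in installed.values():
        occupied.update(range(start_u, start_u + height_u))

    open_units = [u for u in range(1, cabinet_units + 1) if u not in occupied]
    print(f"Install filler panels in {len(open_units)} open units: {open_units}")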

The large volume of cables on the right side of the cabinet needs to be bundled and managed. Space must be provided for air to flow between the cable bundles and the cabinet side panel. The cable bundles must also be prevented from sliding rearward in the cabinet and blocking the air intake of the switch. It is important that the switch be located in the cabinet so that none of the intake or exhaust screens are occluded by the equipment rails or cabinet structural members. Since there will be a significant quantity of cables, especially if more than one switch is installed in the cabinet, the routing of these cable bundles must be given consideration. A cable routing guide should be provided by the cabinet manufacturer so that adequate planning can be performed before the cabling is installed.

Given that cable routing and cable management are so important to the intake airflow of the Cisco switch, a cabinet intended to house this equipment must provide solutions for high-density cabling. As discussed previously, the recommended cable routing within the cabinet must be defined by the cabinet manufacturer so that airflow requirements can be addressed.

Figure 8: Cable routing inside the cabinet

When more than one switch is installed in the cabinet, a routing plan must be developed so that the cable installation does not undermine the desired airflow pattern and restrict airflow to the switch.

Providing Cable Management for High-Density Cabling

Given proper cabinet frame architecture, enhanced cable management features should also be provided. These features should include support for cables coming off of the switch and into the right front corner of the cabinet. Fingers aligned with each rack-mount space provide support and help keep cables organized.

Figure 9: Cable management inside the cabinet

In addition to support for the cables routed to the right off the front of the switch, features within the cabinet should be present to prevent the vertical run of cables within the cabinet from drifting rearward and covering part of the switch's air intake. Brackets or cable spools oriented sideways should be mounted to the side of the frame to create a barrier, but without restricting airflow from the front of the cabinet.

Performance Testing

Performance testing is critical to evaluating any thermal management solution. Chatsworth Products, Inc. (CPI) has performed extensive Computational Fluid Dynamics (CFD) analysis as well as laboratory testing on the N-Series TeraFrame Network Cabinet with the Network Switch Exhaust Duct installed. The CFD images earlier in this document are from this analysis. CPI's N-Series TeraFrame Cabinet has been designed to meet the Cisco third-party cabinet specifications.

Figure 10: Laboratory thermal testing

During the laboratory testing, the cabinet was loaded with switches: a Cisco 9513 and a Cisco 6509. A baseline was established by taking measurements without side panels, doors or an exhaust duct. Multiple airflow configurations were then tested to evaluate the thermal impact of sides, doors and the exhaust duct. Results showed that operating temperatures were reduced by the use of a network switch exhaust duct as compared with the enclosed cabinet without the duct. Laboratory testing and CFD analysis show the specific benefits of managing airflow around Cisco switches inside a cabinet.

Conclusion

The specific airflow requirements and cable densities associated with Cisco 6500 and 9500 series switches create unique demands on cabinets. In order for a network cabinet to successfully meet these challenges it must manage the side-to-side airflow pattern in a hot aisle/cold aisle environment, eliminate hot exhaust re-circulation inside the cabinet, prevent hot exhaust air from directly entering adjacent equipment, manage cabling so that cool air can be effectively delivered to the switch intake, and provide adequate cable routing inside the cabinet as well as cable ingress to the cabinet.

CPI's N-Series TeraFrame Network Cabinet uses managed exhaust airflow to deliver superior thermal management by redirecting the right-to-left airflow of switching equipment into a front-to-back airflow pattern that fits into hot aisle/cold aisle data centers. The N-Series TeraFrame Cabinet also utilizes a recessed frame design that provides the space required to support the high-density cabling environment created by the switch as well as superior airflow characteristics. This simple, passive approach delivers effective cable management and enhanced equipment cooling, resulting in more efficient use of available cool air and better overall heat transfer away from equipment. CPI's design has proven effective at managing the heat loads associated with modular switching equipment.