UC Berkeley Data Center Overview
Shelton Waggener, Associate Vice Chancellor & Chief Information Officer
shelw@berkeley.edu
August 2006
Berkeley's Data Center: Opened July 2004
Our Previous Facility (don't call it a data center)
Floods
Electrical
Aging facility
Out of space
Seismic
Plague, rats, downtime
Data Center Timeline
7 years to justify the need and demand
2 years to get budget approval
20 months to build the base building
18 months to design the data center
12 months to plan the move
6 months to build the actual data center
72 hours to move in
Design Requirements
Academic highest priorities: high availability; low cost; 10/100/1000 networking; secure rack; remote access; on-site access; sandbox and safe
Research highest priorities: fiber and optical infrastructure; InfiniBand; 1000/10000 Ethernet; flexibility (rack and re-rack regularly); high-speed copper (Cat 5e/6); 200 watts/sq ft (15 kW rack); very large staging needs
Administrative highest priorities: high availability; high security; shared services; dev, test, production; 24/7 operation
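The research-area power figures above can be cross-checked with a little arithmetic: a 15 kW rack at a 200 W/sq ft design density implies roughly 75 sq ft of floor per rack, aisles and cooling space included. A minimal sketch of that calculation (the 75 sq ft footprint is derived here, not stated in the deck):

```python
# Relating the research-area power figures on this slide: a 15 kW rack at a
# 200 W/sq ft design density implies ~75 sq ft of floor per rack (the rack
# plus its share of aisle and cooling space). The footprint is derived here,
# not stated in the deck.
RACK_POWER_W = 15_000              # 15 kW per HPC rack (from the slide)
DESIGN_DENSITY_W_PER_SQFT = 200    # watts per square foot (from the slide)

effective_footprint_sqft = RACK_POWER_W / DESIGN_DENSITY_W_PER_SQFT
print(f"Effective floor area per 15 kW rack: {effective_footprint_sqft:.0f} sq ft")  # -> 75
```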
Flexibility
Lower raised floor: minimum under-floor utilities (only chill loops and leak detection systems)
Modular support systems: standardized cabinets and rack assignments
Expandable: built-in HVAC, power, and square-footage growth
Monitoring: all monitoring systems component based
People space: all shared open space, collocated with the NOC
High Availability
N+1 for all components, minimum
Excess power capacity for peak load (3 MW)
Modular AC & electrical
High seismic rating
Environmental minimization
Isolation from outside influences
Advanced facility monitoring systems
Security
Physical security: locking cabinets; single access point; escorted guests; restricted floor; no dark alleys in the raised floor; caged routers and switches
Monitoring and access: staffed entrance; positive response; prox card readers; background checks; IP cameras; coded cabinet labels
Move Opportunities
Eliminate older systems
Phase out legacy technologies
Reduce complexity
Consolidate platforms
Implement standard backup and storage architectures
Costs
Total project: $11.7M for the data center and move
Base building: $23M
Electrical: 2,500 kVA, $4.4M
HVAC: water/glycol, 300 tons, $921K
Networking: Cat 5, Cisco 6509, $2.2M
Move: 300 servers, $1.326M
Floor plan: 2K sq ft Phase II expansion space, 5K sq ft Phase III expansion space, people space
The base building uses much larger steel, and more of it. Note the core placement of the environmental penthouse on the roof.
Structural steel: 2 x 18 x 18 K-bracing, 24 ft on center
Tube Steel welded directly to structural steel
Unistrut bolted to tube steel
No penetrations in the ceiling; entry only at the core, then down and spread across.
Concrete-filled floor tiles
Chilling Loops Under Floor
Heavy-duty 18-gauge steel four-point stanchions interwoven with the structural steel. Grounded to the building.
Purchased with the construction budget: APC in-cabinet air filtration, boosters, dual PDUs
T-12 Cabinet Specifics
(Rack elevation diagram: DSR2010, PowerEdge 6450/6650/2550 servers, APC components)
19" rack, standard EIA-310-D 1992 (Electronic Industries Association)
Rack unit = 1 ¾" x 19" x 36"
Cabinet features: fan unit, 24 IP ports, 24 fiber connections, doors, KVM or serial console over IP
38 usable RUs per rack
Data center total occupancy: 6,650 rack units
Expect to reach occupancy in 2008
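A quick check of the occupancy figures on this slide, assuming every cabinet delivers the full 38 usable RUs (actual cabinet mixes may differ):

```python
# Rough cabinet-count check from the figures on this slide. Assumes every
# cabinet contributes the full 38 usable RUs; the actual layout may differ.
USABLE_RU_PER_CABINET = 38    # from the slide
TOTAL_RU_CAPACITY = 6650      # data center total occupancy, from the slide

cabinets = TOTAL_RU_CAPACITY / USABLE_RU_PER_CABINET
print(f"Implied cabinet count: {cabinets:.0f}")  # -> 175
```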
All roof-mounted environmental units were preassembled on a sled and crane-dropped onto the roof.
Seismic Bracing: Non-rack Mounted Equipment
Above-Floor Connection
The 'Z'-shaped clamp operates on a pivot to attach to the leveling leg of the equipment.
Equipment without leveling legs can be braced via a bracket attached to the frame.
Clamps and under-floor attachments can be installed while equipment is online.
Under-Floor Connection
The under-floor attachment is made via 'C'-shaped brackets, with threaded rod making the connection between the raised flooring and the concrete subfloor.
The under-floor system provides lateral and vertical support.
Versatile installation allows for flexibility in congested under-floor areas.
All server cabinets are attached to the seismic bracing at the top and the raised flooring system at the bottom, and ganged together.
The cable plant is all ladder-racked, with distribution nodes at each cabinet: 24 ports of high-speed copper and 24 ports of high-speed fiber.
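Combining the per-cabinet port counts above with the roughly 175 cabinets implied by the T-12 slide gives a rough sense of the total cable plant (the cabinet count is derived, not stated in the deck):

```python
# Rough port-count estimate from the per-cabinet figures above, using the
# ~175-cabinet build-out implied by the T-12 slide (derived, not stated).
CABINETS = 6650 // 38            # ~175 cabinets at 38 usable RUs each
COPPER_PORTS_PER_CABINET = 24    # from the slide
FIBER_PORTS_PER_CABINET = 24     # from the slide

print(f"Copper ports: {CABINETS * COPPER_PORTS_PER_CABINET}")  # -> 4200
print(f"Fiber ports:  {CABINETS * FIBER_PORTS_PER_CABINET}")   # -> 4200
```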
Copper cable color is determined by geographic distribution; fiber chases at the patching core.
Non-Rack-Mount Equipment in Server Cabinets
Legacy equipment near end of life
Strapping and Quake Grip mats
Tape Vault
Attached to seismic bracing at the top and the raised flooring system at the bottom
Expansion space incorporated into the layout
Lessons Learned
Don't rely on your campus design team or architects; bring in a good/great consultant
80 watts/sq ft is plenty as a design point IF you also plan in some specific HPC rack capacity
Design for expansion or modularity; you will need it
Standardize, standardize, standardize
Design for lights-out operation
Load bank for full testing, regularly
Use the move opportunity to plan change
Completed Data Center Slide Show
Co-location Recharge
Monthly co-location is charged by rack unit, covering data center operating expenses
Recharge rate: $8.00 per rack unit per month, approved 9/26
The recharge rate is calculated by applying occupancy levels to estimated expenses, which include:
Staff and services
Plant and equipment maintenance
Monitoring and tools
Inventory items
Service improvement programs
Subsidies
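The rate calculation described above reduces to dividing estimated monthly expenses by the expected number of billed rack units. A minimal sketch of that arithmetic, with placeholder expense and occupancy figures chosen only to illustrate how an $8.00/RU/month rate falls out; the actual budget numbers are not in this deck:

```python
# Hedged sketch of the recharge-rate arithmetic described on this slide.
# The expense total and occupancy level below are placeholders, NOT actual
# Berkeley budget figures; only the $8.00/RU/month result and the 6,650-RU
# capacity come from the slides.
TOTAL_RU_CAPACITY = 6650       # usable rack units in the data center (from the slides)
occupancy_level = 0.60         # assumed fraction of RUs billed (placeholder)
monthly_expenses = 31_920.0    # assumed operating expenses per month (placeholder)

occupied_rus = TOTAL_RU_CAPACITY * occupancy_level
rate_per_ru = monthly_expenses / occupied_rus
print(f"Recharge rate: ${rate_per_ru:.2f} per RU per month")  # -> $8.00 with these inputs
```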