A new data-centre for the LHCb experiment


A new data-centre for the LHCb experiment
Loïc Brarda, Beat Jost, Daniel Lacarrère, Rolf Lindner, Niko Neufeld, Laurent Roy, Eric Thomas
Physics Department, CERN, CH-1211 Geneva 23, Switzerland
Computing in High Energy and Nuclear Physics (CHEP) 2012
Presented by Niko Neufeld (CERN, Geneva, Switzerland)

Overview
1. The LHCb upgrade, its DAQ and the requirements
   - DAQ
   - Farm and data-centre
2. Implementation options
   - Brick and mortar
   - Remote hosting
   - Modular / containerized data-centre

The LHCb upgrade
- Major upgrade of the entire experiment during LS2 (2017 to 2018)
- Freely adjustable low-level trigger allowing readout rates between 1 MHz and 40 MHz
- Allows reading out the entire detector at the bunch-crossing rate (40 MHz)
- Sustains a luminosity of up to 2 × 10^33 cm⁻² s⁻¹

A 40 MHz DAQ
[diagram: architecture of the 40 MHz DAQ]

Data acquisition requirements

# of input links: 10,000
bandwidth per input link: 3.2 Gbit/s
average total event size: 100 kB
total bandwidth for the DAQ: 32 Tbit/s
output bandwidth: 2 GB/s

The data produced by each bunch-crossing need to be zero-suppressed directly on the detector to reduce the number of input links from the detector.
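The figures above are mutually consistent; a quick sanity check (not part of the original slides, just a sketch of the arithmetic):

```python
# Sanity check of the DAQ numbers on this slide: the total bandwidth derived
# from the per-link figure and from event size * bunch-crossing rate should
# both come out at 32 Tbit/s.

N_LINKS = 10_000          # number of input links
LINK_BW = 3.2e9           # bit/s per input link
EVENT_SIZE_BITS = 100e3 * 8   # 100 kB average event size, in bits
RATE = 40e6               # bunch-crossing rate: 40 MHz

total_from_links = N_LINKS * LINK_BW         # 3.2e13 bit/s
total_from_events = EVENT_SIZE_BITS * RATE   # 3.2e13 bit/s

print(total_from_links / 1e12)   # 32.0 (Tbit/s)
print(total_from_events / 1e12)  # 32.0 (Tbit/s)
```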

Event Filter Farm requirements
- Capable of processing 30 million events per second (out of 40 million bunch-crossings)
- Scalable architecture (staged deployment)
- Houses up to 5000 dual-socket industry-standard servers
- Servers may be more general compute units (combinations of servers and co-processor cards)
- A server is understood to be a dual-socket machine with at least 1 GB of RAM per hardware thread

Data-centre requirements

max # of servers: 5000
max # of useful Us for servers: 2500
Us for switches: 2 Us per 36 servers
Us for patch panels per rack: 3
depth of the racks: min. 1000 mm
power feeds per rack: 2
total usable IT power: 2 MW
minimum power per server: 350 W

Also: low PUE, long-term usefulness, and low cost (overall budget for the upgrade: 50 MCHF).
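A back-of-the-envelope check of these requirements (a sketch, not project code; the slide only fixes the totals):

```python
import math

SERVERS = 5000
SERVER_US = 2500              # 5000 servers in 2500 U, i.e. 2 servers per U
SWITCH_US_PER_36 = 2          # 2 U of switches per 36 servers
POWER_PER_SERVER_W = 350      # minimum power per server
IT_POWER_W = 2e6              # total usable IT power: 2 MW

# Rack units needed for the network switches across the whole farm
switch_us = math.ceil(SERVERS / 36) * SWITCH_US_PER_36

# Does the minimum server power budget fit into the 2 MW of IT power?
it_power_needed_w = SERVERS * POWER_PER_SERVER_W   # 1.75 MW

print(switch_us)                        # 278
print(it_power_needed_w <= IT_POWER_W)  # True
```

So the 350 W-per-server floor leaves roughly 250 kW of IT-power headroom at full build-out.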

Non-requirements
- Safe power: the servers run from an NFS root; the processing time is short, so losses are negligible; the detector (the data source) is not on safe power either
- Storage racks in the data-centre: a small amount of on-site storage can be kept in an existing server room
- Power-efficient CPUs: we optimize CPU per unit of money only; CERN is a very large power consumer, so it gets good prices for electricity; we can go for very high power density in a rack

The current farm
- Installed 100 m underground (UX85A)
- Accessible only by authorized personnel under radiological supervision
- Reuses old detector-electronics racks (non-standard posts and depth)
- Power and cooling limited to 500 kW

The current farm (continued)
[photos of the current farm]

A brick-and-mortar data-centre
- Is said to cost 10 M$ per MW of IT capacity [1]
- This figure assumes a commercial-grade facility: full battery backup, flywheels, redundancy, ...

(Cost) factors in our favour
- Site available with high-power feeds and a cooling plant
- Industrial site, so only standard safety and environmental constraints apply: it can be ugly and noisy as long as it is cheap
- Single-purpose (single-customer) facility: no security-sensitive data, only theft protection required

The Point-8 site
[photo of the Point-8 site]

The first idea
- Co-located building with a new control room and offices
- Utility feed from the basement
- Water-cooled racks (heat-exchanger doors)

The first suggestion from management: remote hosting
- Main argument: the limited lifetime of the LHCb experiment (about 10 years after 2017) versus the long-term interest of CERN
- Minor argument: done by CERN/IT, use the "Cloud"
Actually 3 options:
1. Remote hosting off the CERN site
2. Remote hosting on the CERN site in an existing centre
3. Remote hosting on the CERN site in a new centre: much more expensive than on-site, so we won't discuss it further

A technical aside: data transport on a small number of fibres
- In both remote-hosting options it is necessary to transport 32 Tbit/s over a very small number of fibres
- This is obvious for off-site hosting (cost of fibre / bandwidth rental)
- Considerations are based on 10 Gbit/s and 100 Gbit/s technologies
- Minimal number of fibre pairs using e.g. LR4 (25 Gbit/s signalling): 1280 if only a single wavelength is used
- 40 lambdas and more can be multiplexed on a single fibre
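The fibre counts above follow from simple division; a sketch of the arithmetic (the 40-lambda DWDM figure is taken from the slide text and is assumed to apply per fibre pair):

```python
import math

TOTAL_BW = 32e12      # bit/s to transport out of the DAQ
LANE_RATE = 25e9      # bit/s per wavelength (LR4-style 25 Gbit/s signalling)
LAMBDAS = 40          # wavelengths assumed multiplexable on one fibre pair

# One wavelength per fibre pair: one pair per 25 Gbit/s lane
pairs_single_lambda = math.ceil(TOTAL_BW / LANE_RATE)        # 1280 pairs

# With DWDM packing LAMBDAS wavelengths onto each pair
pairs_with_dwdm = math.ceil(pairs_single_lambda / LAMBDAS)   # 32 pairs

print(pairs_single_lambda)  # 1280
print(pairs_with_dwdm)      # 32
```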

Case study on data transport
- A detailed study with one major vendor has been done, assuming an active solution (where the individual 10 Gigabit lanes are fully monitored and managed, as opposed to passive multiplexing)
- Assumed about 10 km distance, direct connection; no cost for fibre laying or rental
- The cost per 10 Gigabit is found to be 7000 USD for 2016 (including an aggressive price compression per year); this includes redundancy and failover within a few minutes
- Alternatives are coloured interfaces at the output of the DAQ network (expensive) and passive multiplexing (DWDM, difficult to manage)
- In any case, privately operating such an infrastructure requires trained personnel and has high maintenance costs: unrealistic in our environment
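Scaling the quoted per-channel price to the full DAQ bandwidth shows why this is prohibitive (a sketch, assuming the entire 32 Tbit/s is carried as managed 10 Gbit/s channels):

```python
# Rough cost of the managed data path, using the 7000 USD per 10 Gbit/s
# figure quoted above (a projection for 2016, not a firm price).

TOTAL_BW_GBIT = 32_000        # 32 Tbit/s expressed in Gbit/s
COST_PER_10G_USD = 7_000      # quoted cost per managed 10 Gbit/s channel

channels = TOTAL_BW_GBIT // 10           # 3200 channels
cost_usd = channels * COST_PER_10G_USD

print(channels)          # 3200
print(cost_usd / 1e6)    # 22.4 (million USD, for transport alone)
```

Set against the 50 MCHF overall upgrade budget, the data path alone would consume a large fraction of it.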

Remote hosting off the CERN site
Costs:
- renting the data-centre
- power
- renting of the data path
- data transport
The last point alone is significantly above budget.

Remote hosting on the CERN site
- Assume that a large part (about 2/3) of the capacity of the computer centre (building 513) could be used
- Should fit easily within 10 km
- Can use multiplexing (expensive, see before) or install 10 Gigabit fibres (also expensive: 12 MCHF according to a CERN-internal estimate)
- Seems expensive in comparison to the next option

The solution: a modular data-centre
- Studying various cooling solutions in detail showed that to achieve a very low PUE (< 1.2), the PUE must be the primary design criterion
- The resulting constraints on the building conflict with co-use as an office building and for control-room hosting
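PUE is the ratio of total facility power to IT power; a minimal illustration (the 2 MW IT figure is from the requirements slide, the overhead figure is hypothetical) of how little headroom a PUE below 1.2 leaves:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_kw

IT_KW = 2000.0            # 2 MW of usable IT power
COOLING_ETC_KW = 400.0    # hypothetical overhead: cooling, conversion losses, ...

# At the PUE 1.2 limit, everything besides IT -- cooling, UPS and
# distribution losses, lighting -- must fit within 400 kW:
print(pue(IT_KW + COOLING_ETC_KW, IT_KW))  # 1.2
```

This is why free cooling, which nearly eliminates the largest overhead term, dominates the design.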

Advantages of a containerized solution
- Lowest PUE (when using free cooling): an optimised design with integrated control
- Ease of re-use at the CERN site
- Capital investment costs lower than, or at least not higher than, brick and mortar
- Could be re-sold if no longer needed
- Separation of IT infrastructure and office usage
- Ready to use: requires only power (and a foundation)

How it could look
[rendering of the proposed containerized data-centre]

Results from prototype studies
From discussions with two major providers we established:
- feasibility (within budget)
- vendor neutrality (for the servers)
- most likely no need for additional cooling, according to the latest ASHRAE recommendations [2]

Summary
- Taking advantage of the special requirements of the LHCb event-filter farm, a cost-effective hosting solution in on-site containers has been identified
- The other options are more expensive and often suffer from the need to transport a large amount of data over a long distance
- Such a solution will offer the best PUE and can most likely operate without any additional cooling

What's next?
- Start construction of the new control room and office building next year
- Transfer the LHCb control room at the end of LS1
- Define final requirements for the modular data-centre
- A competitive call for tender will be done in 2017

For Further Reading
[1] M. Bramfitt and H. Coles. Modular/Container Data Centers Procurement Guide: Optimizing for Energy Efficiency and Quick Deployment, Feb. 2011. [Online]. Available: http://hightech.lbl.gov/documents/data_centers/modular-dcprocurement-guide.pdf
[2] The Green Grid. Updated Air-Side Free Cooling Maps: The Impact of ASHRAE 2011 Allowable Ranges, 2012. [Online]. Available: http://www.thegreengrid.org/ /media/whitepapers/wp46updatedairsidefreecoolingmapstheimpactof