The supercomputer for particle physics at the ULB-VUB computing center

The supercomputer for particle physics at the ULB-VUB computing center. P. Vanlaer, Université Libre de Bruxelles, Interuniversity Institute for High Energies (ULB-VUB). Tier-2 cluster inauguration, ULB, May 8, 2009.

The Large Hadron Collider at CERN. Search for the Brout-Englert-Higgs boson. Search for the nature of Dark Matter: Supersymmetry? Search for clues of unification of gravity and the other interactions: extra space dimensions? Micro black holes?... Made possible by unique accelerator characteristics: a factor >5 jump in collision energy and a factor 100 jump in beam luminosity.

The Compact Muon Solenoid (CMS)

Contributions of ULB-VUB. The Interuniversity Institute for High Energies (ULB-VUB), together with U.Antwerpen, contributes to the central tracker (>200 m2 of silicon sensors) since 1992 and to the muon system (resistive plate chambers) since 2000. 1800 of the 15,000 silicon detectors were assembled in Brussels. Picture of silicon tracker endcap.

Insertion of the central tracker in CMS, December 2007. CMS closed and ready for beam, September 3, 2008.

CMS in operation. Muons from beam dump. Cosmic rays: 600 x 10^6 collected. First LHC beam, September 10 (CMS, ATLAS, LHCb, ALICE).

Worldwide LHC Computing Grid (W-LCG). LHC data processing is a challenge: 10^9 events/year/experiment, ~100,000 processors, 15 million gigabytes of disk per year, with only 20% of the resources provided by CERN. The Grid is an infrastructure that provides seamless access to computing power and data storage capacity distributed over the globe, just as the World Wide Web provides seamless access to information stored in millions of different locations. Distributed, heterogeneous resources connected by the Internet are put in common through a software interface, the Grid middleware, which is transparent for the users.
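A back-of-envelope check (my arithmetic, not on the slide): spread uniformly over a year of running, the quoted disk volume corresponds to a sustained write rate of roughly

\[
\frac{15 \times 10^{6}\ \text{GB}}{3.15 \times 10^{7}\ \text{s}} \approx 0.5\ \text{GB/s},
\]

or about 15 MB of disk per event at 10^9 events per year.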

The Worldwide LHC Computing Grid (W-LCG). Grid infrastructure project co-funded by the European Commission, now in its 3rd phase, with partners in 45 countries. In production: 240 sites in 45 countries, ~100,000 CPUs, 12 Petabytes, >5000 users, >260 VOs, >100,000 jobs/day.
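Taken together, these production figures suggest (my reading, not stated on the slide) an average throughput of about one job per CPU per day:

\[
\frac{10^{5}\ \text{jobs/day}}{10^{5}\ \text{CPUs}} \approx 1\ \text{job per CPU per day},
\]

which would be consistent with typical Grid jobs running for up to a day on a fully loaded infrastructure.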

CMS hierarchy of computing centers. CERN Tier-0: prompt processing. CERN Analysis Facility: calibration and data quality monitoring. Data flows out of the Tier-0 at 600 MB/s; the Tier-1 centers (Germany, Italy, France, ...) receive 50-500 MB/s and perform reprocessing and the production of subsamples for analyses; the Tier-2 centers (among them Belgium) receive 50-500 MB/s from the Tier-1s and run simulations and analysis.
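For scale (my arithmetic, assuming the 600 MB/s export rate is sustained around the clock), the Tier-0 output amounts to roughly

\[
600\ \text{MB/s} \times 86\,400\ \text{s/day} \approx 52\ \text{TB/day}
\]

shipped to the Tier-1 centers.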

The Belgian Tier-2. Joint project of the particle physics groups from ULB, VUB, UA, UMH, UCL and UGent. Funded by F.R.S.-FNRS, FWO, the Vlaams gemeenschap and ULB for a total budget of 1.5 million euros. Federation with 2 sites: the ULB-VUB Computing Center and CISM (Louvain-la-Neuve). Serving ~70 physicists in Belgium. Staff at IIHE: 3 persons (2 FTE) at ULB-VUB: Shkelzen Rugovac, Stephane Gerard (ULB), Olivier Devroede (VUB). Key contributions also from Othmane Bouhali (ULB), on leave of absence at the computing center of the University of Qatar, and Stijn De Weirdt, now at the Vlaams Supercomputing Center.

The IIHE cluster. CPU: 148 cores from BEgrid in 32 physical nodes, for 1.3 Tflops; 276 cores from CMS in 47 physical nodes, for 2.6 Tflops; 424 cores in total in 79 physical nodes, for 3.9 Tflops. Disk: 260 TB for CMS; will increase by a factor of 4 by 2011 (full LHC intensity). Partnership with the BELNET Grid project, BEgrid. Picture of cluster.
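As a consistency check on these figures (my arithmetic, not from the slide), the two sub-clusters deliver roughly the same per-core throughput:

\[
\frac{3.9 \times 10^{12}\ \text{flops}}{424\ \text{cores}} \approx 9\ \text{Gflops per core},
\]

in line with commodity server CPUs of that period.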

Installation at the ULB-VUB computing center. Critical infrastructures were available, giving economies of scale: high bandwidth to the GEANT research network, with 1 Gb/s to CERN and the other Tier-1 centers; cooling and air conditioning, for an expected heat dissipation of ~40 kW in 2011; high-power electric lines; protection against power glitches. Spare cooling power at the ULB-VUB computing center is used, and a common investment was made in a larger power-glitch protection (UPS). Sharing of responsibilities: the Computing Center provides the infrastructures and the firewall; the IIHE handles cluster deployment, software deployment and user support.
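To put the 1 Gb/s link in perspective (my arithmetic, assuming the full nominal bandwidth is available), moving one terabyte takes a little over two hours:

\[
\frac{8 \times 10^{12}\ \text{bit}}{10^{9}\ \text{bit/s}} = 8000\ \text{s} \approx 2.2\ \text{h per TB}.
\]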

The LHC Grid at work. Chart of average data transfer volume (terabytes per day) against the required burst level in the 1st year. Shares: CERN 11%, Tier-1 35%, Tier-2 54%.

3% of world grid resources at ULB-VUB.

Performance of the IIHE cluster: ~45,000 jobs per month; Gbit/s inbound and outbound traffic; in the top 10 of most reliable sites.

Use of the cluster beyond particle physics. 10% of the CPU is freely accessible to owners of a Grid account. Accounts are managed centrally; access rights and priorities are managed per research project. Certification Authority: BELNET, which delegates to Registration Authorities (1 per university). Virtual Organizations (VO's): the generic VO beapps for BELNET users is supported on the IIHE cluster. Grid use per domain, share per VO in 2008: CMS 83%; beapps 12.2%, mostly the Ice3 experiment; betest 1.7%.

Other achievements in the last 5 years. 11 Master's theses: 4 with the Faculties of Sciences and of Applied Sciences, 7 in collaboration with Morocco. 15 talks at international conferences. CMS Monte-Carlo production team in 2007: management of 20% of the simulated samples for the collaboration. CMS software deployment team. Seminars and tutorials with BELNET.

Thanks. Technical teams of the IIHE and of the Computing Center: Shkelzen, Olivier, Stephane, Stijn, Othmane, Abdel, Georges, Edwin; Simone De Brouwer, Andre Schierlinckx, Paul Raeymaekers. Funding agencies: F.R.S.-FNRS, FWO, Vlaams gemeenschap, ULB. ULB-VUB authorities, for their support in installing the cluster at the computing center premises.

Backup

The LHC computing challenge. 10^9 events/year/experiment; ~100,000 processors required; 15 million gigabytes of disk per year. Funding is raised locally; CERN provides 20% of the resources. Efficient analysis worldwide: Grid technologies.

The CMS collaboration