The supercomputer for particle physics at the ULB-VUB computing center
1 The supercomputer for particle physics at the ULB-VUB computing center. P. Vanlaer, Université Libre de Bruxelles, Interuniversity Institute for High Energies (ULB-VUB). Tier-2 cluster inauguration, ULB, May 8
2 The Large Hadron Collider at CERN. Search for the Brout-Englert-Higgs boson; search for the nature of Dark Matter (Supersymmetry?); search for clues of unification of gravity and the other interactions (extra space dimensions? micro black holes?). Made possible by unique accelerator characteristics: a factor >5 jump in collision energy and a factor 100 jump in beam luminosity.
4 The Compact Muon Solenoid (CMS)
5 Contributions of the ULB-VUB Interuniversity Institute for High Energies (ULB-VUB, together with U. Antwerpen): to the central tracker (>200 m2 of silicon sensors) since 1992, and to the muon system (resistive plate chambers). Silicon detector modules for the silicon tracker endcap were assembled in Brussels.
6 Insertion of the central tracker in CMS, December 2007. CMS closed and ready for beam, September 3, 2008.
7 CMS in operation: muons from beam dump, cosmic rays collected. First LHC beam, September 10 (CMS, ATLAS, LHCb, ALICE).
8 Worldwide LHC Computing Grid (W-LCG). LHC data processing is a challenge: 10^9 events/year/experiment, ~100,000 processors, 15 million Gigabytes of disk per year, with only 20% of the resources provided by CERN. The Grid is an infrastructure that provides seamless access to computing power and data storage capacity distributed over the globe, just as the World Wide Web provides seamless access to information stored in a million different locations. Distributed, heterogeneous resources connected by the Internet are put in common through a software interface, the Grid middleware, which is transparent for the users.
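A quick sanity check of these figures, as a minimal Python sketch: it only rearranges the two numbers quoted on the slide (the variable names and rounding are mine).

```python
# Simple arithmetic on the figures quoted on this slide:
# 10^9 events/year/experiment and 15 million Gigabytes of disk per year.

events_per_year = 1e9
disk_per_year_gb = 15e6                  # 15 million GB = 15 PB
seconds_per_year = 365 * 24 * 3600

avg_data_per_event_mb = disk_per_year_gb * 1e3 / events_per_year
avg_write_rate_gb_s = disk_per_year_gb / seconds_per_year

print(f"Average stored data per event: ~{avg_data_per_event_mb:.0f} MB")
print(f"Average storage growth rate:   ~{avg_write_rate_gb_s:.2f} GB/s")
```

Per experiment this works out to roughly 15 MB of stored data per event and an average storage growth of about 0.5 GB/s, which is what drives the need for distributed Grid resources.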
9 The Worldwide LHC Computing Grid (W-LCG) is a Grid infrastructure project co-funded by the European Commission, now in its 3rd phase with partners in 45 countries. In production: sites in 45 countries, ~100,000 CPUs, 12 Petabytes of storage, and more than 100,000 jobs per day submitted by its users and VOs.
10 CMS hierarchy of computing centers. The CERN Tier-0 performs prompt processing, and the CERN Analysis Facility handles calibration and data quality monitoring. Data is exported at 600 MB/s from the Tier-0 to the Tier-1 centers (Germany, Italy, France, ...), which take care of reprocessing and of producing subsamples for analyses, and flows from there to the Tier-2 centers (including Belgium), which run simulations and user analysis.
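To put the 600 MB/s export rate in perspective, here is a small Python sketch of the daily volume it implies; this is plain arithmetic on the rate quoted above (the Tier-1 to Tier-2 rates were lost in the transcription and are not assumed here).

```python
# Daily data volume implied by the quoted Tier-0 -> Tier-1 export rate.

rate_mb_s = 600                       # 600 MB/s, from the slide
seconds_per_day = 24 * 3600

daily_volume_tb = rate_mb_s * seconds_per_day / 1e6   # MB -> TB
print(f"Sustained {rate_mb_s} MB/s corresponds to ~{daily_volume_tb:.0f} TB/day")
```

A sustained 600 MB/s corresponds to roughly 50 TB shipped out of CERN per day.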
11 The Belgian Tier-2: a joint project of the particle physics groups from ULB, VUB, UA, UMH, UCL and UGent, funded by F.R.S.-FNRS, FWO, the Vlaamse Gemeenschap and ULB for a total budget of 1.5 MEuro. It is a federation with 2 sites, the ULB-VUB Computing Center and CISM (Louvain-la-Neuve), serving ~70 physicists in Belgium. Staff at the IIHE: 3 persons (2 FTE) at ULB-VUB: Shkelzen Rugovac, Stephane Gerard (ULB), Olivier Devroede (VUB). Key contributions also from Othmane Bouhali (ULB), on leave of absence for the computing center of the University of Qatar, and Stijn De Weirdt, now at the Vlaams Supercomputing Center.
12 The IIHE cluster. CPU: 148 cores from BEgrid in 32 physical nodes (1.3 Tflops) and 276 cores from CMS in 47 physical nodes (2.6 Tflops), for 424 cores in total in 79 physical nodes (3.9 Tflops). Disk: 260 TB for CMS, to increase by a factor 4 by 2011 (full LHC intensity). Partnership with the BELNET Grid project BEgrid. (Picture of the cluster.)
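A minimal consistency check of these cluster figures in Python; the numbers are taken from the slide, and the factor-4 growth is the plan stated for 2011 (variable names are mine).

```python
# Consistency check on the IIHE cluster figures quoted above.

begrid = {"cores": 148, "tflops": 1.3}   # BEgrid contribution
cms    = {"cores": 276, "tflops": 2.6}   # CMS contribution

total_cores  = begrid["cores"] + cms["cores"]     # 424 cores
total_tflops = begrid["tflops"] + cms["tflops"]   # 3.9 Tflops
gflops_per_core = total_tflops * 1e3 / total_cores

disk_now_tb  = 260
disk_2011_tb = disk_now_tb * 4                    # planned factor-4 increase

print(f"{total_cores} cores, {total_tflops:.1f} Tflops (~{gflops_per_core:.0f} Gflops/core)")
print(f"Disk: {disk_now_tb} TB now, ~{disk_2011_tb} TB foreseen for 2011")
```

The two contributions add up to the quoted 424 cores and 3.9 Tflops (about 9 Gflops per core), and the planned factor-4 growth brings the disk to roughly 1 PB by 2011.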
13 Installation at the ULB-VUB computing center. Critical infrastructures were already available, giving economies of scale: high bandwidth to the GEANT research network (1 Gb/s to CERN and other Tier-1 centers), cooling and air conditioning (expected heat dissipation ~40 kW in 2011), high-power electric lines and protection against power glitches. Spare cooling power at the ULB-VUB computing center is used, together with a common investment in larger power-glitch protection (UPS). Sharing of responsibilities: the Computing Center handles infrastructures and the firewall; the IIHE handles cluster deployment, software deployment and user support.
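For scale, a small sketch of what the 1 Gb/s research-network link quoted above can carry per day, again using only the figure from the slide.

```python
# Daily throughput of the quoted 1 Gb/s link to CERN and the Tier-1 centers.

link_gbit_s = 1.0
seconds_per_day = 24 * 3600

daily_tb = link_gbit_s / 8 * seconds_per_day / 1e3   # Gbit/s -> GB/s -> TB/day
print(f"A fully used {link_gbit_s:g} Gb/s link moves ~{daily_tb:.1f} TB/day")
```

A fully used 1 Gb/s link moves about 10 TB per day.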
14 The LHC Grid at work. Share of Grid resources: CERN 11%, Tier-1 centers 35%, Tier-2 centers 54%. (Plot: averaged data transfer volume in Terabytes per day, compared to the average and to the required burst level for the 1st year.)
15 3% of world grid resources at ULB-VUB.
16 Performance of the IIHE cluster. (Plots: jobs per month; inbound and outbound network traffic in Gbit/s.) Among the top 10 most reliable sites.
17 Use of the cluster beyond particle physics. 10% of the CPU is freely accessible to owners of a Grid account. Accounts are managed centrally: the Certification Authority (BELNET) delegates to Registration Authorities (1 per university). Access rights and priorities are managed per research project through Virtual Organizations (VOs); a generic VO, beapps, for BELNET users is supported on the IIHE cluster. (Plot: Grid use per domain.) Share per VO in 2008: CMS 83%, beapps 12.2% (mostly the Ice3 experiment), betest 1.7%.
18 Other achievements in the last 5 years: 11 Master theses (4 with the Faculties of Sciences and of Applied Sciences, 7 in collaboration with Morocco); 15 talks at international conferences; CMS Monte-Carlo production team in 2007, with management of 20% of the simulated samples for the collaboration; CMS software deployment team; seminars and tutorials with BELNET.
19 Thanks. Technical teams of the IIHE and of the Computing Center: Shkelzen, Olivier, Stephane, Stijn, Othmane, Abdel, Georges, Edwin; Simone De Brouwer, Andre Schierlinckx, Paul Raeymaekers. Funding agencies: F.R.S.-FNRS, FWO, Vlaamse Gemeenschap, ULB. ULB-VUB authorities, for their support in installing the cluster at the computing center premises.
20 Backup
21 The LHC computing challenge: 10^9 events/year/experiment, ~100,000 processors required, 15 million Gigabytes of disk per year. Funding is raised locally; CERN provides 20% of the resources. Efficient analysis worldwide relies on Grid technologies.
22 The CMS collaboration