1 Accelerating Experimental Elementary Particle Physics with the Gordon Supercomputer. Frank Würthwein, Rick Wagner. August 5th, 2013
2 The Universe is a strange place! 67% of the energy is dark energy; we have no clue what this is. 29% of the matter is dark matter; we have some ideas, but no proof, of what this is! All of the matter we know makes up only about 4% of the universe.
3 Fkw's research is focused on the Higgs and Dark Matter. We have delivered the Higgs. Now it's time to search for Dark Matter.
4 Experimental Particle Physics: the Big Bang in the laboratory. We gain insight by colliding particles at the highest energies possible to measure: production rates, masses & lifetimes, and decay rates. From these we derive the spectroscopy as well as the dynamics of elementary particles. Progress is made by going to higher energies and/or brighter beams: higher energies get us closer to the Big Bang, while brighter beams allow the study of rare phenomena.
5 To study Dark Matter, we need to create it in the laboratory. [Aerial photo with labels: CMS, Lake Geneva, CERN]
6 The Large Hadron Collider
7 The CMS Experiment
8 The CMS Experiment: 80 million electronic channels x 4 bytes x 40 MHz ~ 10 Petabytes/sec of information; x 1/1000 zero-suppression; x 1/100,000 online event filtering; ~ Megabytes/sec of raw data to tape; 1 to 10 Petabytes of raw data per year. 2000 scientists (1200 Ph.D. in physics), ~180 institutions, ~40 countries. The detector: 12,500 tons, 21 m long, 16 m diameter.
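The reduction chain on this slide is easy to check with back-of-the-envelope arithmetic. A minimal sketch in Python, using only the numbers quoted above (the 10^7 seconds of LHC live-time per year is my assumption, not a figure from the slide):

    # Rough check of the CMS readout arithmetic quoted above.
    channels = 80e6            # electronic channels
    bytes_per_channel = 4      # bytes per channel per readout
    crossing_rate = 40e6       # 40 MHz beam-crossing rate

    raw = channels * bytes_per_channel * crossing_rate          # bytes/sec
    print(f"off the detector: {raw / 1e15:.1f} PB/s")           # ~12.8, i.e. "~10 PB/s"

    to_tape = raw * (1 / 1_000) * (1 / 100_000)                 # zero-suppression, then online filter
    print(f"to tape: {to_tape / 1e6:.0f} MB/s")                 # works out to order 100 MB/s

    live_seconds = 1e7         # assumed LHC live-time per year
    print(f"per year: {to_tape * live_seconds / 1e15:.2f} PB")  # consistent with 1-10 PB/year

The two suppression factors together are 10^-8, which is what takes a detector streaming petabytes per second down to a tape rate a single facility can actually archive.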
9 The Challenge: how do we organize the processing of 10s to 1000s of Petabytes of data by a globally distributed community of scientists, and do so with manageable change costs for the next 20 years? Solution to the Challenge: choose technical solutions that allow computing resources to be as distributed as the human resources. Support distributed ownership and control, within a global single sign-on security context. Design for heterogeneity and adaptability.
10 CMS global processing infrastructure: depends on a federation of regional infrastructures. Tier-1: archive & primary processing. Tier-2: simulation & science data analysis.
11 The Open Science Grid: a consortium of universities and national labs to share resources and technologies to advance science. Open to all of science, including biology, chemistry, computer science, engineering, mathematics, medicine, and physics. The backbone of CMS processing in the US.
12 Vision going forward: implemented this vision for the first time in Spring 2013, using the Gordon Supercomputer at SDSC.
13 Using Gordon to Accelerate LHC Science
14 Contributors: Brian Bockelman (UNL), Igor Sfiligoi (UCSD), Matevz Tadel (UCSD), James Letts (UCSD), Frank Würthwein (UCSD), Lothar A. Bauerdick (FNAL), Rick Wagner, Mahidhar Tatineni, Eva Hocks, Kenneth Yoshimoto, Scott Sakai, Michael L. Norman
15 When Grids Collide
16 Overview: 2012 LHC data collection rates were higher than first planned (1000 Hz vs. 150 Hz). The additional data was "parked", to be reduced during the two-year shutdown, which delays the science from that data until the end. A rough sense of the scale is sketched below.
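To get a feel for the backlog, here is a small sketch using only the two trigger rates quoted above, treating everything above the planned rate as parked; the 10^7 seconds of live-time is again my assumption:

    # How much piles up when you record at 1000 Hz but planned for 150 Hz?
    recorded_hz = 1000       # actual 2012 collection rate
    planned_hz = 150         # rate the prompt processing was sized for
    live_seconds = 1e7       # assumed LHC live-time for the year

    parked_fraction = (recorded_hz - planned_hz) / recorded_hz
    parked_events = (recorded_hz - planned_hz) * live_seconds
    print(f"parked fraction: {parked_fraction:.0%}")   # 85% of what was recorded
    print(f"parked events:   {parked_events:.2e}")     # roughly 8.5 billion events

Reducing a backlog of that size during the shutdown is exactly the kind of burst workload that motivated borrowing a supercomputer.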
17 Linking the Grids. CMS components: CMSSW (base software components, NFS-exported from the I/O node); OSG worker node client (CA certs, CRLs); Squid proxy (caches the calibration data needed for each job, running on the I/O node); glideinWMS (worker-node manager that pulls down CMS jobs); BOSCO (GSI-SSH capable batch job submission tool); PhEDEx (data transfer management). Connectors: GSI authentication, GridFTP, SSH. A lot of shared knowledge, common tools, and connectors.
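The key trick is BOSCO-style submission: jobs reach Gordon's batch system over SSH, so the supercomputer needs no grid middleware of its own. A minimal Python illustration of that pattern follows; the host name, script name, and qsub call are placeholders chosen for illustration, not BOSCO's actual interface:

    # Sketch of SSH-based remote batch submission (the idea behind BOSCO):
    # no grid stack on the remote cluster, just an SSH-reachable login node.
    import subprocess

    LOGIN_NODE = "gordon.sdsc.edu"   # remote cluster login node (placeholder)
    JOB_SCRIPT = "run_cms_reco.sh"   # wrapper that starts a glideinWMS pilot (hypothetical)

    def submit_remote(login_node: str, job_script: str) -> str:
        """Copy the job script to the remote host and queue it there."""
        subprocess.run(["scp", job_script, f"{login_node}:"], check=True)
        # Gordon ran a PBS/Torque-style scheduler, hence qsub; adjust for others.
        result = subprocess.run(
            ["ssh", login_node, "qsub", job_script],
            check=True, capture_output=True, text=True,
        )
        return result.stdout.strip()   # the batch system's job ID

    if __name__ == "__main__":
        print("submitted:", submit_remote(LOGIN_NODE, JOB_SCRIPT))

Once the pilot starts on a Gordon worker node, it phones home to glideinWMS and pulls down CMS jobs exactly as an OSG site would, which is why the rest of the CMS tool chain needed no changes.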
19 Results: work completed in February to March. Millions of collision events processed; 125 TB in, ~150 TB out; ~2 million SUs. Good experience regarding OSG-XSEDE compatibility.