CDFII Computing Status
1 CDFII Computing Status
OUTLINE:
- New CDF-Italy computing group organization
- Usage status at FNAL and CNAF
- Towards GRID: where we are
- Plans and requests
Donatella Lucchesi, 22/04/2005
2 CDFII Computing group changes
- S. Belforte keeps for the moment 20% (1 day/week) on CDF: necessary to guarantee the transition toward a new national coordinator (responsabile nazionale)
- The CDF-Italia projects for software and data handling are mature and well under control:
  - transition from our machines at FNAL to our farm at CNAF completed successfully
  - merging of our CNAF farm into the Tier1 common pool almost completed
  - new software infrastructure for the CAF rewritten by Igor Sfiligoi, co-head of Computing and Grid for CDF at FNAL
(Presented by Stefano for CMS)
3 CDFII Computing group changes
- Transition toward GRID started and very promising
- Our budget has been ~0 Euro for the last two years
- New people arrived in the last 12 months, and Igor Sfiligoi has taken on a very important new role:
  - Donatella Lucchesi (art. 23 Padova)
  - Francesco Delli Paoli (borsa tecn. INFN Padova)
  - Armando Fella (borsa tecn. INFN Pisa)
  - Subir Sarkar (A.R. tecn. Tier1 CNAF)
  - Daniel Jeans (A.R. tecn. INFNGRID Roma)
- New group organization
(Presented by Stefano for CMS)
4 CDFII-Italy Computing before
- Stefano Belforte, factotum with helpers: planning, management, committees, talks; HW resources purchase and management in Italy and USA
- Francesco Delli Paoli, Donatella Lucchesi: code distribution in Italy; data replica, access and management in Italy
- Armando Fella: large dataset management, split, skim, concatenation
- Subir Sarkar: supervision, management and operation of the CNAF farm
(Presented by Stefano for CMS)
5 CDFII-Italy Computing group
- Stefano Belforte: coordination
- Donatella Lucchesi: needs assessment; interface with CSN1, Tier1 and GRID; HW resources management in Italy and USA
- Francesco Delli Paoli: code distribution in Italy; data replica, access and management in Italy
- Armando Fella: large dataset management, split, skim, concatenation
- Igor Sfiligoi: CAF and GRID for CDF
- Daniel Jeans: porting of the CAF to LCG
- Subir Sarkar: management and operation of the CNAF farm, migration to the Tier1 common pool
(Presented by Stefano)
6 CAF status at FNAL (last 6 months)
- CAF reserved queue: ~300 GHz
- All slots used; relatively high demand before analysis approvals for conferences
[Plot: Italian users on CondorCAF]
7 CDF at CNAF
- 2004: from 8 to 80 boxes; from FBSNG to Condor
- Christmas: CPU saturated!
- December: +22 TB of disk, data import at ~200 Mb/s
- Fair share: 800 users, 54 active last month
- NFS stinks, GPFS ok, RFIO ok
- Condor great, Glide-In in progress
(Stefano Belforte, INFN Trieste)
8 Towards GRID: Glide-in
- Just a small change over the production Condor-based CAF
- Use glide-ins to extend the Condor pool
- A dedicated head node must be installed and maintained near the Computing Element
- CDF user policy stays in our hands
- Use dedicated storage
- Now in beta-testing stage at CNAF: test users have access to 500 KSI2K of CPU
- Work done by Subir Sarkar and Igor Sfiligoi
(Presented by Igor, April 15)
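As a rough illustration of the glide-in idea described above, the following Python snippet writes out a pilot wrapper and an HTCondor grid-universe submit description: pilot jobs land on the Computing Element's batch system, start a Condor startd that reports back to the CAF head node, and the remote worker nodes thereby temporarily join the CAF pool. Host names, file names and the number of pilots are made-up placeholders, and this is only a sketch of the mechanism, not the actual GlideCAF code by Igor Sfiligoi and Subir Sarkar.

```python
#!/usr/bin/env python3
"""Sketch of the glide-in mechanism: generate a pilot wrapper and an
HTCondor grid-universe submit file. All host names and paths are
hypothetical placeholders for illustration only."""

import textwrap

CAF_HEAD_NODE = "glidecaf-head.cnaf.infn.it"        # hypothetical CAF head node
CE_CONTACT = "gridce.cnaf.infn.it/jobmanager-lsf"   # hypothetical CE contact string

# Wrapper executed on the remote worker node: it points a personal Condor
# startd at the CAF head node's collector via _CONDOR_* environment overrides.
pilot_script = textwrap.dedent(f"""\
    #!/bin/sh
    export _CONDOR_CONDOR_HOST={CAF_HEAD_NODE}
    export _CONDOR_DAEMON_LIST="MASTER, STARTD"
    export _CONDOR_START=TRUE
    condor_master -f
    """)

# Grid-universe submit description that launches 20 pilots on the CE's batch system.
submit_description = textwrap.dedent(f"""\
    universe      = grid
    grid_resource = gt2 {CE_CONTACT}
    executable    = glidein_pilot.sh
    output        = pilot.$(Cluster).$(Process).out
    error         = pilot.$(Cluster).$(Process).err
    log           = pilot.log
    queue 20
    """)

if __name__ == "__main__":
    with open("glidein_pilot.sh", "w") as f:
        f.write(pilot_script)
    with open("glidein.submit", "w") as f:
        f.write(submit_description)
    print("Wrote glidein_pilot.sh and glidein.submit (submit with condor_submit)")
```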
9 Towards GRID: Glide-in
- CDF users are submitting jobs via GlideCAF to the common LSF farm
- In principle we have access to 1.1 THz
- Need to define CDF-dedicated resources for when the LHC Data Challenges will be in progress
10 Everything works: bottom lines
- FNAL OK; CNAF OK, usage ~80%
- CAF at CNAF moving to the common LSF pool: OK
- LCG portal OK
- Users happy, analysis not limited by resources
- Impossible to say now if and when we will have a crisis
- Compare CDF 2008/9 with the Tier1 resources pledged by INFN for LCG as of the April RRB
- CDF cost is low: 2005/2006 is covered by what has already been approved; the future is in the next slides
11 Disk Space
- CDF produces ~100 TB of data per year; ~20% comes to CNAF, i.e. 20 TB per year
- The same amount of disk space is foreseen for Monte Carlo, ntuples and home-made reduced datasets
- At the end of 2005 CDF has 90 TB (80 from CNAF)
- Small expansion: +40 TB/year
- CDF is a small fraction of the CNAF/Tier1 program
- In the reference years 2008/9, CDF = 4-5% of the Tier1
[Table: Year, CDF disk (TB), Tier1 disk (TB), %CDF]
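As a back-of-the-envelope cross-check of the figures quoted above, here is a small sketch that projects the CDF disk at CNAF from 90 TB at the end of 2005 with a +40 TB/year expansion. The Tier1 totals are hypothetical placeholders, since the table values did not survive the transcription; they only show how the %CDF column would be derived.

```python
# Projection of CDF disk at CNAF from the slide's figures:
# 90 TB at the end of 2005, then a small expansion of +40 TB/year.
# Tier1 totals below are HYPOTHETICAL placeholders (the slide's table is not
# legible here); they only illustrate how %CDF is computed.

CDF_DISK_2005_TB = 90          # from the slide
YEARLY_EXPANSION_TB = 40       # "small expansion: +40 TB/year"

tier1_disk_tb = {              # hypothetical Tier1 disk totals, for illustration
    2006: 1000,
    2007: 2000,
    2008: 4000,
    2009: 6000,
}

cdf = CDF_DISK_2005_TB
for year in sorted(tier1_disk_tb):
    cdf += YEARLY_EXPANSION_TB
    share = 100.0 * cdf / tier1_disk_tb[year]
    print(f"{year}: CDF {cdf} TB, Tier1 {tier1_disk_tb[year]} TB, %CDF = {share:.1f}%")
```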
12 CPU: Assisi CDF requests
- Scenario presented in 2004 (3 times), based on the CDF official expectations presented by Rick Snider and unchanged as of today
- CPU for CDF-Italy users: linear increase; CPU for CDF: 15% of the total needs, as presented
- In the reference year 2008/9 CDF asks for as much CPU as ATLAS or CMS
- Not crazy, but difficult to make the case for it now
- Today's proposal: we will ask for that much when and if we need it
13 CPU: Assisi CDF requests
[Table: Year, CDF (KSI2K), Tier1 (KSI2K), %CDF]
14 CPU: zero-growth exercise
- Assuming, as an exercise, that starting from 2006 CDF does not buy any CPU, we have 900 KSI2K
- In the reference year 2008/9 CDF is 12% of the Tier1, a bit less than ~1/3 of ATLAS or CMS
- Not surprising: CDF will have ~10 PB of data
- CDF will not be insignificant in any case
15 CPU: zero-growth exercise
[Table: Year, CDF (KSI2K), Tier1 (KSI2K), %CDF]
16 CPU: in medio stat virtus
- But CDF does not disappear: a small increase per year is needed
  - luminosity will increase, and data as well
  - to have something to count on in case the GRID or FNAL are too loaded
  - gives the possibility to redefine the increase each year (positive or negative)
- Assumption: increase of 300 KSI2K/year after 2006
- In the reference year 2008/9 CDF is 15% of the Tier1, to be compared with 12% in the zero-growth case
17 CPU: in medio stat virtus
[Table: Year, CDF (KSI2K), Tier1 (KSI2K), %CDF]
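To make the two CPU scenarios above concrete, here is a small sketch comparing the zero-growth case with the +300 KSI2K/year case, starting from the 900 KSI2K quoted on the zero-growth slide. The Tier1 capacities are hypothetical placeholders (the tables are not legible in this transcription), so the printed percentages are only indicative and will not exactly reproduce the 12% and 15% quoted on the slides.

```python
# Comparison of the two CPU scenarios from the previous slides:
#   "zero growth":          no CPU purchases after 2006 (stays at 900 KSI2K)
#   "in medio stat virtus": +300 KSI2K/year after 2006 (assumption from the slide)
# Tier1 capacities below are HYPOTHETICAL placeholders; the percentages are
# only indicative of how the %CDF column would be computed.

CDF_2006_KSI2K = 900

tier1_ksi2k = {2006: 2500, 2007: 4000, 2008: 6000, 2009: 7500}  # hypothetical

def cdf_capacity(year, yearly_increase):
    """CDF CPU in KSI2K for a given year, with a constant yearly increase after 2006."""
    return CDF_2006_KSI2K + yearly_increase * max(0, year - 2006)

for label, increase in [("zero growth", 0), ("+300 KSI2K/year", 300)]:
    print(f"Scenario: {label}")
    for year, t1 in sorted(tier1_ksi2k.items()):
        cdf = cdf_capacity(year, increase)
        print(f"  {year}: CDF {cdf} KSI2K, Tier1 {t1} KSI2K, %CDF = {100*cdf/t1:.0f}%")
```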
18 Summary
- CDF-Italia has a group of people working on Computing and Data Handling, which has been reorganized to cope with Stefano's departure
- CAF@CNAF is highly used: several datasets are available and used for analysis
- Big improvements towards GRID:
  - GlideCAF used to produce Monte Carlo
  - working hard to have an LCG-based CAF
- Requests made in Assisi are still valid for disk and CPU
- A different scenario was also presented, to give us the possibility to discuss our needs each year based on our own and LHC usage
19 Backup
22 Assisi requests
CDF Analysis hardware plan:
[Table: CDF analysis needs and the 15% INFN share, by year: GHz, TB, K$]
Roadmap for CNAF (CDF farm at CNAF):
[Table by year: GHz and TB "for INFN" and "for CDF"; grid CNAF physicists; 30% of our CPU; GHz to add; GRID GHz; total GHz for CDF]
Notes: "for INFN" already paid; to be discussed in Assisi.