LHCb activities at PIC
1 CCRC08 post-mortem — LHCb activities at PIC. G. Merino, PIC, 19/06/2008
2 LHCb Computing: main user analysis is supported at CERN + the 6 Tier-1s; Tier-2s are essentially Monte Carlo production facilities.
3 CCRC08: Planned tasks. May activities: maintain the equivalent of 1 month of data taking, assuming a 50% machine cycle efficiency. Raw data distribution from the pit to the T0 centre, and from T0 to the T1 centres using FTS (T1D0). Reconstruction of raw data at CERN & T1 centres: RAW (T1D0) → rDST (T1D0). Stripping of data at CERN & T1 centres: RAW & rDST (T1D0) → DST (T1D1). Distribution of DST data to all other centres using FTS — T0D1 (except CERN, T1D1).
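The planned data flow above can be summarised as a small table of activities and output storage classes. This is an illustrative sketch (the structure and function names are mine, not DIRAC code); the storage-class strings follow the slide's T1D0/T1D1/T0D1 notation, where a leading T1 means one tape replica.

```python
# Sketch of the CCRC08 LHCb data-flow plan (illustrative structure, not DIRAC):
# each step maps an input dataset to an output and its storage class.
PLAN = [
    # (activity, input, output, output storage class)
    ("transfer",  "RAW @ pit",  "RAW @ T0",        "T1D0"),
    ("transfer",  "RAW @ T0",   "RAW @ T1s",       "T1D0"),  # via FTS
    ("recons",    "RAW",        "rDST",            "T1D0"),
    ("stripping", "RAW + rDST", "DST + ETC",       "T1D1"),  # stored locally
    ("distrib",   "DST",        "DST @ other T1s", "T0D1"),  # CERN keeps T1D1
]

def outputs_on_tape(plan):
    """Activities whose output storage class includes a tape copy (T1Dx)."""
    return [(activity, sc) for activity, _, _, sc in plan if sc.startswith("T1")]
```

Everything except the final DST distribution lands on tape, which is why the transfer machinery has to wait for tape migration before declaring success.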
4 Activities across the sites. Planned breakdown of processing activities (CPU needs) prior to CCRC08: CERN 14%, FZK 11%, IN2P3 25%, CNAF 9%, NIKHEF/SARA 26%, PIC 4%, RAL 11%.
5 Tier-0 → Tier-1: FTS transfers from CERN to the Tier-1 centres. Transfer of RAW only occurs once the data has migrated to tape & the checksum is verified. Rate out of CERN was ~35 MB/s averaged over the period, with peak rates far in excess of the requirement. In smooth running, sites matched the LHCb requirements.
6 Tier-0 → Tier-1 (transfer-rate plot)
7 Tier-0 → Tier-1: to first order, all transfers eventually succeeded. The plot shows efficiency on the 1st attempt. Incidents: an issue with UK certificates, a restart of the IN2P3 SRM endpoint, a CERN outage, and CERN SRM endpoint problems.
8 Reconstruction: used SRM 2.2. LHCb space tokens are LHCb_RAW (T1D0) and LHCb_RDST (T1D0). Data shares need to be preserved — important for resource planning. Input is 1 RAW file, output is 1 rDST file (1.6 GB). Reduced the number of events per reconstruction job from 50k to 25k (job of ~12 hours duration on a 2.8 kSI2k machine) in order to fit within the available queues. Need queues at all sites that match our processing time. Alternative: reduce the file size!
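The job-sizing figures on this slide can be sanity-checked with a back-of-the-envelope scaling: assuming wall time grows linearly with events and inversely with SI2k power (my assumption; the slide only gives the 25k-event reference point).

```python
# Back-of-the-envelope job-duration scaling from the slide's reference job:
# 25k events take ~12 hours on a 2.8 kSI2k machine.
def job_hours(events, ksi2k, hours_ref=12.0, events_ref=25_000, ksi2k_ref=2.8):
    """Estimated wall-clock hours, scaled linearly in events, inversely in power."""
    return hours_ref * (events / events_ref) * (ksi2k_ref / ksi2k)

halved = job_hours(25_000, 2.8)   # the reduced job size: 12 h
original = job_hours(50_000, 2.8) # the original 50k-event job: 24 h
```

The 50k-event job at ~24 hours explains why it overran typical batch queues, and why halving the events (or shrinking the RAW file) was the pragmatic fix.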
9 Reconstruction: after the data transfer the file should still be online, as the job is submitted immediately. NOTE: in principle only LHCb has this requirement of online reconstruction. Reco jobs read their input data from the T1D0 write buffer. Just in case, LHCb pre-stages files (srm_bringonline) and then checks the status of the file (srm_ls) via GFAL before submitting the pilot job. Pre-staging should ensure the file is accessible from cache. The only issue was at NL-T1, with the reporting of file status.
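The pre-stage-then-check pattern described here can be sketched as a simple polling loop. This is only an illustration of the control flow: `bring_online` and `stat` are passed in as callables because the real calls go through GFAL, and nothing below is the actual GFAL API.

```python
import time

# Illustrative sketch of "srm_bringonline, then poll srm_ls before submitting
# the pilot job". bring_online/stat are injected callables, not real GFAL calls.
def prestage_and_wait(surl, bring_online, stat,
                      timeout_s=3600, poll_s=60, sleep=time.sleep):
    bring_online(surl)              # asynchronous tape-recall request
    waited = 0.0
    while waited <= timeout_s:
        if stat(surl) == "ONLINE":  # file has reached the disk cache
            return True             # safe to submit the pilot job
        sleep(poll_s)
        waited += poll_s
    return False                    # never came online: do not submit
```

The NL-T1 problem mentioned on the slide corresponds to the `stat` step here: if the SRM misreports the locality, the loop either submits too early or never submits at all.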
10 Reconstruction: 41.2k reconstruction jobs were submitted; 27.6k jobs proceeded to the Done state (Done/created ~67%). Submitted / Done (fractions of total) per site:
CERN 6.1k (14%) / 5.3k (13%) — 86%
CNAF 3.9k (9%) / 2.8k (7%) — 72%
GridKa 4.1k (11%) / 3.1k (7%) — 76%
IN2P3 10.3k (25%) / 6.1k (14%) — 56%
NIKHEF 10.3k (26%) / 2.3k (6%) — 23%
PIC 1.8k (4%) / 1.6k (4%) — 89%
RAL 4.7k (11%) / 3.5k (8%) — 74%
11 Reconstruction: of the 27.6k reconstruction jobs in the Done state, 21.2k jobs processed the full 25k events (Done with 25k events / created ~77%). 3.0k jobs failed to upload the rDST to the local SE; only 1 attempt is made before trying failover (failover / 25k events ~13%). Per site — jobs with 25k events (fraction of Done) / failover uploads / fully successful per created:
CERN 5.2k (100%) / 0.7k (14%) / 76%
CNAF 2.6k (95%) / 0.0k (1%) / 67%
GridKa 3.0k (99%) / 0.7k (22%) / 58%
IN2P3 5.1k (90%) / 0.7k (14%) / 43%
NIKHEF 1.2k (53%) / 0.9k (70%) / 4%
PIC 1.6k (99%) / 0.0k (0%) / 89%
RAL 3.1k (89%) / 0.0k (1%) / 68%
12 (plot)
13 Human error at PIC: a WN with a misconfigured network since May acted as a black hole (ticket-4386).
14 Reconstruction — CPU efficiency (ratio of CPU to wall-clock time) of running jobs. CNAF: more jobs than cores on a WN. IN2P3 & RAL: problems reading input data.
15 Reconstruction — CPU efficiency (ratio of CPU to wall-clock time) of running jobs. PIC: the most CPU-efficient T1.
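The efficiency metric used on these two slides is just CPU seconds over wall-clock seconds; the example values below are made up for illustration, not taken from the plots.

```python
# CPU efficiency as used on the slides: CPU time / wall-clock time of a job.
def cpu_efficiency(cpu_s, wall_s):
    return cpu_s / wall_s

# A healthy reconstruction job spends most of its wall time on CPU...
good = cpu_efficiency(cpu_s=10_800, wall_s=11_520)     # 0.9375
# ...while a job stalled reading input data (the IN2P3/RAL symptom) or
# sharing a core with another job (the CNAF symptom) shows a low ratio.
stalled = cpu_efficiency(cpu_s=3_600, wall_s=11_520)   # 0.3125
```

Low efficiency thus points at I/O or scheduling problems at the site rather than at the application itself.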
16 dCache observations. Official LCG recommendation: p3. LHCb ran smoothly at half of the T1 dCache sites. PIC OK — version 12p6 (dcap). GridKa OK — version p2 (dcap). IN2P3 problematic — version p6 (gsidcap): seg faults meant we needed to ship our own version of GFAL to run; could this explain the CGSI-gSOAP problem? NL-T1 problematic (gsidcap): many versions deployed during CCRC to solve a number of issues (p3 → p4 → …).
17 Databases. The conditions DB was used at CERN & the Tier-1 centres; there were no replication tests of the conditions DB from the pit to Tier-0 (and beyond). Switched to using the conditions DB for reconstruction on 15th May. LFC: streaming is used to populate the read-only instances at the T1s from CERN. A problem with the CERN instance revealed that the local instances were not being used by LHCb! Testing is underway now.
18 Stripping runs on rDST files: 1 rDST file & its associated RAW file (space tokens: LHCb_RAW & LHCb_rDST). The DST files & ETC produced during the process are stored locally on T1D1, an additional storage class (space token: LHCb_M-DST). The DST & ETC files are then distributed to all other computing centres on T0D1, except CERN which uses T1D1 (space tokens: LHCb_DST, LHCb_M-DST at CERN).
19 Stripping: 31.8k stripping jobs were submitted; 17.0k datasets failed to resolve, and 9.3k jobs ran to Done. Major issues with the LHCb book-keeping. Submitted / Done per site: CERN 2.4k / 2.3k; CNAF 2.3k / 2.0k; GridKa 2.0k / 2.0k; IN2P3 4.5k / 0.2k; NIKHEF 0.3k / <0.1k; PIC 1.1k / 1.1k; RAL 2.2k / 1.6k.
20 Stripping: T1→T1 transfers (CNAF, GridKa, PIC, RAL — the stripping test was limited to 4 T1 centres). Initial problems uploading to the M-DST token at PIC; catch-up was OK once solved.
21 Conclusions. Despite being LHCb's smallest Tier-1, PIC's quality of service was the highest in CCRC08. The Tier-1 processes tested were: reception of data from CERN; reconstruction; stripping and shipping of DSTs to the other Tier-1s. The results at PIC were positive: reception of data from CERN (~5 MB/s); reading of data from the WNs (dcap) OK; replication of DSTs to the other Tier-1s demonstrated at higher than the required rate (catch-up). The exercise has also been useful for LHCb to detect the weak points of its Grid infrastructure: DIRAC; improving the book-keeping system, log files, etc.