Grid Computing in Aachen


1 Grid Computing in Aachen, III. Physikalisches Institut B. Report week (Berichtswoche) of the Graduiertenkolleg, Bad Honnef. [Title slide with funding logo: "GEFÖRDERT VOM" ("funded by").]

2 Concept of Grid Computing
A Computing Grid works like the power grid, but for computing and storage resources. Key features of a Computing Grid:
- full resource availability at every single client computer
- grid sites can be distributed around the world
- standardised protocols, data formats and environments
- advantages: scalability, low costs, reliability
- disadvantages: troubleshooting, maintenance

3 Concept of Grid Computing
[Diagram: users run analysis applications that transparently use distributed resources: CPU clusters, supercomputers, disk storage and tape storage.]

4 Grid Infrastructure for the LHC
[Diagram: the tiered LHC computing grid. Tier-1 centres (GridKa, IN2P3, TRIUMF, BNL, ASCC, Nordic, FNAL, CNAF, SARA, PIC, RAL) each serve a set of Tier-2 sites.]

5 Infrastructure of a typical Tier-2 site
[Diagram: a Computing Element (CE) with its worker nodes (WN) and a Storage Element (SE). Each site publishes site information: CE status, close SEs (not necessarily at the same site), supported protocols and file statistics, plus network information between this and other sites.]
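The site information published this way can be queried from any user interface through the grid information system. A minimal sketch, assuming a gLite-style UI with the standard lcg-infosites client in the PATH and a valid grid proxy; the VO name cms and the plain-text parsing are illustrative:

```python
#!/usr/bin/env python
# Sketch: list the Computing and Storage Elements that publish support
# for a VO, via the gLite information-system client lcg-infosites.
# Assumes a gLite user interface with lcg-infosites and a valid proxy.
import subprocess

def infosites(resource, vo="cms"):
    """Run 'lcg-infosites --vo <vo> <resource>' and return its output lines."""
    out = subprocess.check_output(["lcg-infosites", "--vo", vo, resource])
    return out.decode().splitlines()

if __name__ == "__main__":
    print("Computing Elements supporting the cms VO:")
    for line in infosites("ce"):
        print(" ", line)
    print("Storage Elements supporting the cms VO:")
    for line in infosites("se"):
        print(" ", line)
```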

6-13 Job Submission
[Animated diagram sequence showing a job passing through the gLite workload management system. Components: User Interface; Resource Broker node with Network Server, MatchMaker/Broker, Workload Manager, Job Adapter, Job Controller (CondorG), Log Monitor and RB Storage; Replica Catalog; Information Service; Logging & Bookkeeping; Computing Element (CE); Storage Element (SE). The User Interface sends the job with its Input Sandbox to the Network Server (status: submitted). The job waits in the RB storage (waiting), the MatchMaker/Broker consults the Information Service and the Replica Catalog to select a suitable site (ready), the Job Adapter and the Job Controller (CondorG) hand it to the CE (scheduled), where it runs and performs data transfers/accesses to the SE (running). When the job finishes (done), the Output Sandbox is staged back to the RB storage; once the user has retrieved it, the job is cleared. The Log Monitor and the Logging & Bookkeeping service record every state transition.]
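On the user side this chain is driven by a JDL (Job Description Language) file submitted from the User Interface. A minimal sketch, assuming a gLite 3.1 UI with the glite-wms-job-* command-line clients and a valid VOMS proxy; the executable name, file names and polling interval are illustrative:

```python
#!/usr/bin/env python
# Sketch: submit a grid job through the gLite WMS and poll its status.
# Assumes a gLite UI with the glite-wms-job-* clients and a VOMS proxy.
import subprocess
import time

JDL = """\
Executable    = "analysis.sh";
StdOutput     = "stdout.log";
StdError      = "stderr.log";
InputSandbox  = {"analysis.sh"};
OutputSandbox = {"stdout.log", "stderr.log"};
"""

def main():
    with open("job.jdl", "w") as f:
        f.write(JDL)
    # -a: delegate the proxy automatically; -o: store the job ID in a file.
    subprocess.check_call(
        ["glite-wms-job-submit", "-a", "-o", "jobid.txt", "job.jdl"])
    # Poll until the job has passed through the states shown above
    # (submitted, waiting, ready, scheduled, running, done).
    while True:
        status = subprocess.check_output(
            ["glite-wms-job-status", "-i", "jobid.txt"]).decode()
        print(status)
        if "Done" in status or "Aborted" in status:
            break
        time.sleep(60)
    # The Output Sandbox would then be fetched with glite-wms-job-output.

if __name__ == "__main__":
    main()
```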

14 CMS Grid Computing
CMS data structure:
- RAW: detector data and Level-1 information, 1.5 MB/evt, 2 copies, 5 PB/y
- RECO: REConstructed Objects, 250 kB/evt, 3 copies, 2.1 PB/y
- AOD: Analysis Object Data, 50 kB/evt, 1 copy@T1, 2.6 PB/y
- TAG: high-level physics objects and run info (event directory), ~10 kB/evt
[Diagram: the CMS trigger and DAQ chain. 40 MHz collision rate; Level-1 trigger at 100 kHz; 16 million channels, 3 Gigacell buffers, ~1 Megabyte event data; 1 Terabit/s readout (50k data channels, 500 readout memories, 200 Gigabyte buffers); 500 Gigabit/s switch network feeding the event builder; a 5 TeraFLOP event filter writes ~150 Hz of filtered events over a Gigabit/s service LAN to the petabyte archive.]
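The quoted yearly volumes follow roughly from event size x rate x number of copies, integrated over one LHC year. A worked check for the RAW tier, assuming ~1e7 seconds of data taking per year (an assumption; RECO and AOD additionally accumulate reprocessing passes and extra Tier-1 copies, so this simple product does not reproduce their numbers):

```python
# Worked check of the RAW data volume quoted on the slide.
# Assumption: ~1e7 seconds of LHC data taking per year.
event_size_MB = 1.5    # RAW event size
rate_Hz       = 150    # events stored after the event filter
copies        = 2
seconds_year  = 1e7

volume_PB = event_size_MB * rate_Hz * copies * seconds_year / 1e9
print("RAW: %.1f PB/year" % volume_PB)  # ~4.5 PB/y, matching the ~5 PB/y on the slide
```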

15 Federated CMS-T2 RWTH & DESY
- supported Virtual Organisations (VOs): CMS and the Pierre Auger Observatory [pie chart: 90% / 5% / 5%]
- Monte Carlo production mainly in Aachen
- DESY offers large tape storage and disk space
- host space for the QCD, JetMET, SUSY, Top, Tracker and FWD-Physics groups
- GridKa in Karlsruhe as associated Tier-1

16 Aachen's Grid Structure
Aachen's Grid Deployment Team: Walter Bender, Achim Burdziak, Manuel Giffels, Carsten Hof, Sergey Kalinin, Thomas Kreß, Andreas Nowack, Peter Schiffer, Daiske Tornier, Oleg Tsigenov, Clemens Zeidler
- shift crew (one person per week) monitors hardware, transfers, production, storage, network, etc.; problems are communicated to the experts
- weekly strategy meetings for overall discussions and organisational matters
- ticket system for task structuring and documentation purposes

17 Site Administration: remote hardware management via HP Onboard Administrator (Integrated Lights-Out 2) [screenshot]

18 Site Monitoring: [screenshots of hardware infrastructure monitoring and PhEDEx transfer monitoring]

19 Site Monitoring: [screenshots of Monte Carlo production and disk storage monitoring]

20 Site Monitoring & Services
Grid services at the RWTH Tier-2:
- Computing Element
- dCache (disk storage)
- gLite 3.1 (middleware)
- CMS software framework
- CRAB server (job management)
- DBS (database for datasets)
- PhEDEx (transfers)
- Auger software
- external status monitoring
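Files on the dCache Storage Element are reached through grid protocols such as SRM. A minimal sketch of fetching one file to local disk, assuming a gLite UI with the lcg_util client (lcg-cp) and a valid proxy; the SRM URL and destination path are placeholders, not real RWTH paths:

```python
#!/usr/bin/env python
# Sketch: copy one file from a dCache Storage Element to local disk.
# Assumes a gLite UI with the lcg-cp client and a valid grid proxy.
# The SRM URL below is a placeholder, not a real RWTH path.
import subprocess

surl = "srm://grid-srm.example.de/pnfs/example.de/data/cms/somefile.root"
dest = "file:///tmp/somefile.root"

subprocess.check_call(["lcg-cp", "-v", surl, dest])
print("copied", surl, "->", dest)
```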

21 Tier-2 in Aachen: Prototype System
Prototype system until March 2008:
- located in the Physics Center
- air-conditioned room with 30 kW cooling capacity
- 37 worker nodes with a total of ~100 CPU cores
- 30 TB disk space
- 2 GBit/s WAN link speed
- 1 GBit/s interconnection speed

22 Tier-2 in Aachen: Production System
Production system since April 2008:
- located in the RWTH IT-Center
- installed in water-cooled racks with 160 kW cooling capacity
- homogeneous hardware from HP
- 253 worker nodes with a total of 2024 CPU cores
- 530 TB disk storage
- 2 x 10 GBit/s WAN link speed
- 1-4 GBit/s interconnection speed
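Comparing the two systems gives the scale of the spring 2008 upgrade; a small worked calculation using only the numbers from these two slides (the 2 x 10 GBit/s WAN link is counted as 20 GBit/s aggregate):

```python
# Scale factors of the April 2008 upgrade, from the slide numbers.
proto = {"cores": 100,  "disk_TB": 30,  "wan_Gbps": 2}
prod  = {"cores": 2024, "disk_TB": 530, "wan_Gbps": 20}  # 2 x 10 GBit/s

for key in proto:
    print("%s: factor %.0f" % (key, prod[key] / float(proto[key])))
# cores: ~20x, disk: ~18x, WAN bandwidth: 10x
```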

23-24 Tier-2 in Aachen: Production System [photo slides of the installed system]

25 Aachen's Computing Power
[Chart: CPU power in the Tier-2 area, comparing DESY-HH and RWTH-Aachen]

26 Summary
- Grid Computing: the key technology for LHC physics analyses
- provides extremely large computing and storage resources
- transparent access for the end users
- high effort needed to run a Tier-2
- Aachen's Tier-2 got a major hardware upgrade this spring
- the production system (hardware & software) runs stably
- we are ready for the LHC start-up! ;-)
