Service Challenge Tests of the LCG Grid
1 Service Challenge Tests of the LCG Grid
Andrzej Olszewski, Institute of Nuclear Physics PAN, Kraków, Poland
Cracow '05 Grid Workshop, 22nd Nov 2005
The materials used in this presentation come from many sources; in particular I have used slides from presentations of the LHC experiments and the LHC Computing Grid project.
2 CERN Large Hadron Collider
CERN, the European Organisation for Nuclear Research, hosts the LHC with its four experiments: ALICE, ATLAS, CMS and LHCb.
3 Experiment: the most powerful microscope
4 Data Reduction
Data preselection in real time:
- many different physics processes
- several levels of filtering
- high efficiency for events of interest
- total reduction factor of about 10^7
Trigger chain (target processing time in parentheses):
- ~1 GHz interaction rate
- Level 1, special hardware (2 µs): 75 kHz, fully digitized
- Level 2, embedded processors/farm (10 ms): 2 kHz
- Level 3, farm of commodity CPUs (~2 s): 100 Hz
- Data recording and offline analysis
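To make the quoted reduction factor concrete, here is a minimal Python sketch that chains the rates listed above; the level names and helper function are mine for illustration, not part of any LCG software.

```python
# Illustrative only: trigger-chain rates as quoted on the slide.
levels = [
    ("Interactions",          1e9),   # ~1 GHz interaction rate
    ("Level 1 (hardware)",    75e3),  # 75 kHz after Level 1
    ("Level 2 (processors)",  2e3),   # 2 kHz after Level 2
    ("Level 3 (CPU farm)",    100),   # 100 Hz written to storage
]

def reduction_factor(chain):
    """Overall rejection factor from the first to the last stage."""
    return chain[0][1] / chain[-1][1]

if __name__ == "__main__":
    for name, rate in levels:
        print(f"{name:22s} {rate:12.0f} Hz")
    print(f"Total reduction factor: ~{reduction_factor(levels):.0e}")
```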
5 Data Rates
(Table: per-experiment event rate [Hz], RAW event size [MB], ESD/Reco size [MB], AOD size [kB], Monte Carlo size [MB/evt] and Monte Carlo fraction of real data, for ALICE heavy-ion, ALICE pp, ATLAS, CMS and LHCb; the numerical entries did not survive transcription.)
Running time: pp running from 2008 on gives ~10^9 events per experiment per year; heavy-ion running is ~10^6 seconds/year.
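The table values are lost, but the kind of arithmetic the slide implies can be sketched. The trigger rate, RAW event size and running time below are illustrative assumptions of my own (roughly ATLAS-like figures of the time), not the original table entries.

```python
# Back-of-envelope yearly RAW volume; all inputs are illustrative
# assumptions, not the (lost) table values.
trigger_rate_hz   = 100      # assumed recording rate [Hz]
raw_event_size_mb = 1.6      # assumed RAW event size [MB]
seconds_per_year  = 1e7      # assumed pp running time per year [s]

events_per_year = trigger_rate_hz * seconds_per_year         # ~10^9 events
raw_volume_pb   = events_per_year * raw_event_size_mb / 1e9  # MB -> PB

print(f"Events per year: {events_per_year:.1e}")
print(f"RAW volume     : {raw_volume_pb:.1f} PB/year")
```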
6 Mountains of CPU and Disks
For LHC computing, 100M SpecInt2000, i.e. about 100K of today's 3 GHz Pentium 4 processors, is needed. For data storage, 20 PetaBytes, i.e. about 100K disks/tapes, per year is needed.
At CERN currently: ~2,400 processors, ~2 PetaBytes of disk, ~12 PB of magnetic tape.
Even with technology-driven improvements in performance and costs, CERN can provide nowhere near enough capacity for LHC!
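The "how many boxes" numbers follow from simple division; a short sketch using the stated totals, where the per-unit capacities (about 1 kSI2K per 3 GHz Pentium 4, about 200 GB per disk/tape unit) are my assumptions for the hardware of the era.

```python
# Rough capacity arithmetic behind the slide; per-unit figures are assumptions.
total_cpu_si2k   = 100e6    # 100M SpecInt2000 needed for LHC computing
per_cpu_si2k     = 1e3      # ~1 kSI2K per 3 GHz Pentium 4 (assumed)

total_storage_pb = 20       # ~20 PB per year
per_unit_tb      = 0.2      # ~200 GB per disk/tape unit (assumed)

cpus_needed  = total_cpu_si2k / per_cpu_si2k
units_needed = total_storage_pb * 1e3 / per_unit_tb   # PB -> TB

print(f"CPUs needed          : ~{cpus_needed:,.0f}")
print(f"Disk/tape units/year : ~{units_needed:,.0f}")
```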
7 Large, Distributed Community
~5000 physicists around the world, working around the clock. Offline software effort: 1000 person-years per experiment.
Major challenges:
- communication and collaboration at a distance
- distributed computing resources
- remote software development and physics analysis
8 The LCG Project Objectives
Design, prototyping and implementation of a computing environment for the LHC experiments:
- infrastructure (for HEP it is effective to use PC farms)
- middleware (based on EDG, VDT, gLite, ...)
- operations (experiment VOs, operation and support centres)
Approved by the CERN Council in September 2001.
Phase 1: development of a distributed production prototype, operated as a platform for the data challenges.
Phase 2: installation and operation of the full world-wide initial production Grid system, requiring continued manpower efforts and substantial material resources.
9 Grid Foundation Projects
Globus, iVDGL, Open Science Grid, PPDG, GriPhyN, Condor, Grid3, DataTag, GridPP, NorduGrid, EU DataGrid, EGEE, CrossGrid, INFN Grid.
Globus, Condor and VDT have provided key components of the middleware used. LCG cooperates with other Grid projects; key members participate in OSG and EGEE.
10 LCG Hierarchical Model
(Diagram of the LHC Computing Facility: the Tier-0 at CERN; Tier-1 centres such as Brookhaven and FermiLab in the USA and national centres in Italy, NL, UK, France and Germany; Tier-2 centres at laboratories and universities; Tier-3 physics department and desktop resources.)
11 LCG Hierarchical Model
Tier-0 at CERN:
- record RAW data (1.25 GB/s for ALICE)
- distribute a second copy to the Tier-1s
- calibrate and do first-pass reconstruction
Tier-1 centres (11 defined):
- manage permanent storage of RAW, simulated and processed data
- capacity for reprocessing and bulk analysis
Tier-2 centres (> 100 identified):
- Monte Carlo event simulation
- end-user analysis
Tier-3:
- facilities at universities and laboratories
- access to data and processing in Tier-2s and Tier-1s
12 Tier-0 Architecture
(Network diagram: experimental areas and campus network connected through a distribution layer of Gigabit, 10 Gigabit and double 10 Gigabit Ethernet links (10 Gb/s fanned out to 32 x 1 Gb/s) into a 2.4 Tb/s core with WAN connectivity.)
~6000 CPU servers x 8000 SPECint2000 (2008); ~2000 tape and disk servers.
13 Network Connectivity
National Research Networks (NRENs) at the Tier-1s: ASnet (ASCC), LHCnet/ESnet (BNL and FNAL), GARR (CNAF), RENATER (IN2P3), DFN (GridKa), SURFnet6 (SARA), NORDUnet (Nordic), RedIRIS (PIC), UKERNA (RAL), CANARIE (TRIUMF).
Any Tier-2 may access data at any Tier-1. Tier-2s and Tier-1s are inter-connected by the general purpose research networks.
14 Service Challenges
LCG Service Challenges are about preparing, hardening and delivering the production LHC Computing Environment.
Data recording: CERN must be capable of accepting data from the experiments and recording it at a long-term sustained average rate of GBytes/sec.
Service Challenge 1-2: demonstrate reliable file transfer, disk to disk, between CERN and the Tier-1 centres, sustaining for one week an aggregate throughput of 500 MBytes/sec at CERN.
Service Challenge 3: operate a reliable base service including most of the Tier-1 centres and some Tier-2s; grid data throughput of 1 GB/sec, including 500 MB/sec to mass storage (150 MB/sec and 60 MB/sec at Tier-1s).
Service Challenge 4: demonstrate that all of the offline data processing requirements expressed in the experiments' Computing Models, from raw data taking through to analysis, can be handled by the Grid at the full nominal data rate of the LHC.
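To give a feel for what these throughput targets mean in volume terms, a small Python sketch converts the sustained rates quoted above into data moved per day; the helper name is mine.

```python
# What the Service Challenge throughput targets mean in volume terms.
# Pure arithmetic on the rates quoted in the slide.
def tb_per_day(rate_mb_s: float) -> float:
    """Data volume moved in one day at a sustained rate [MB/s], in TB."""
    return rate_mb_s * 86400 / 1e6

for label, rate in [("SC1-2 aggregate (500 MB/s)", 500),
                    ("SC3 grid throughput (1 GB/s)", 1000)]:
    print(f"{label:30s} -> {tb_per_day(rate):6.1f} TB/day")
```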
15 Schedules
- Sep 05: SC3 service phase
- May 06: SC4 service phase
- Sep 06: initial LHC service in stable operation
- Apr 07: LHC service commissioned
(Timeline: SC3, SC4, LHC service operation; cosmics, first beams, first physics, full physics run.)
SC3: currently finishing the throughput phase, testing basic experiment software chains.
SC4: all Tier-1s and major Tier-2s sustain the nominal final grid data throughput (~1.5 GB/sec).
LHC Service in Operation, September 2006: ramp up to full operational capacity by April 2007.
16 SC2: Throughput to Tier-1s from CERN
Multiple TCP streams and multiple concurrent file transfers have to be used to fill up the network pipe.
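The point about concurrency can be illustrated with a minimal Python sketch that runs several transfers in parallel. The copy command and file names are placeholders, not the GridFTP-based tooling actually used in SC2.

```python
# Minimal sketch of filling a network pipe with concurrent file transfers.
# The transfer command and file list are placeholders, not SC2 tooling.
import subprocess
from concurrent.futures import ThreadPoolExecutor

FILES = [f"file_{i:04d}.dat" for i in range(32)]   # hypothetical file names
STREAMS = 8                                         # concurrent transfers

def copy_one(name: str) -> int:
    """Copy a single file with an external tool (placeholder command)."""
    cmd = ["rsync", f"/data/{name}", "remote-host:/data/"]  # stand-in for gridftp
    return subprocess.call(cmd)

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=STREAMS) as pool:
        results = list(pool.map(copy_one, FILES))
    failed = sum(1 for r in results if r != 0)
    print(f"{len(FILES) - failed}/{len(FILES)} transfers succeeded")
```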
17 SC3 Throughput Tests
(Table: per-site MoU target and average achieved rate [MB/s] to tape and to disk, for ASGC, BNL, FNAL, GridKa, CC-IN2P3, CNAF, NDGF, PIC, RAL, NIKHEF and TRIUMF, all participating with the Tier-0; most numerical entries did not survive transcription.)
Transfers used the SRM interface.
July: low transfer rates and poor reliability of the T0-T1 transfers, running at about half the target of 1 GB/s with poor stability; T1-T2 transfers, at much lower rates, were on target.
Since then better performance has been seen, after resolving FTS and Castor problems at CERN.
18 Baseline Services
- Priority A: high priority and mandatory.
- Priority B: standard solutions are required, but experiments could select different implementations.
- Priority C: desirable to have a common solution, but not essential.
19 LCG/EGEE Services in SC3
Basic services (CE, SE, ...) plus new LCG/gLite service components tested in SC3:
- SRM Storage Element at T0, T1 and T2: SRM 1.1 interface to provide managed storage
- FTS server at T0 and T1: T0 and T1 to provide a (reliable) File Transfer Service
- LFC catalog at T0, T1 and T2: local catalogs to provide information about the location of experiment files and datasets
- VOBOX at T0, T1 and T2: for running experiment-specific agents at a site
20 SRM Service
SRM v1.1 is insufficient; the experiments require:
- volatile and permanent space
- global space reservation: reserve, release, update (mandatory for LHCb, useful for ATLAS and ALICE)
- permissions on directories (mandatory), preferably based on roles and not on DN (SRM integrated with VOMS is desirable)
- directory functions (except mv)
- pin/unpin capability
- relative paths in SURLs (important for ATLAS and LHCb, not for ALICE)
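The requirements above can be summarised as an interface; the Python sketch below is a hypothetical abstraction of my own, not the real SRM protocol or any existing client library.

```python
# Hypothetical interface mirroring the SRM capabilities listed above.
# This is NOT the real SRM protocol or an existing client library.
from abc import ABC, abstractmethod

class ManagedStorage(ABC):
    @abstractmethod
    def reserve_space(self, size_bytes: int, lifetime_s: int) -> str:
        """Global space reservation; returns a reservation token."""

    @abstractmethod
    def release_space(self, token: str) -> None:
        """Release a previous reservation."""

    @abstractmethod
    def mkdir(self, path: str, role: str) -> None:
        """Directory functions with role-based (not DN-based) permissions."""

    @abstractmethod
    def pin(self, surl: str, lifetime_s: int) -> None:
        """Pin a file so it stays on disk for the given lifetime."""

    @abstractmethod
    def unpin(self, surl: str) -> None:
        """Release a pin."""
```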
21 FTS Service
The base storage and transfer infrastructure (GridFTP, SRM) is required first, at high priority, and has to demonstrate sufficient quality of service. A reliable transfer layer on top of it is valuable, and the gLite FTS seems to satisfy the current requirements.
The experiments plan on integrating FTS as an underlying service of their own file transfer and data placement services. Interaction with FTS (e.g. catalog access) can be implemented either in the experiment layer or by integrating into the FTS workflow. Regardless of the transfer system deployed, experiment-specific components need to run at both Tier-1s and Tier-2s.
Without a general service, inter-VO scheduling, bandwidth allocation, prioritisation, rapid handling of security issues etc. would be difficult.
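A sketch of what "integrating FTS as an underlying service" could look like from the experiment layer: a thin Python wrapper around the gLite FTS command-line client. The command names and options follow the FTS client of that era as I recall it, and the endpoint is a placeholder; treat the exact interface as an assumption.

```python
# Sketch of an experiment-layer wrapper around the gLite FTS CLI.
# Command names/options are assumed, not a verified interface.
import subprocess

FTS_ENDPOINT = "https://fts.example.org:8443/FileTransfer"  # placeholder endpoint

def submit_transfer(source_surl: str, dest_surl: str) -> str:
    """Submit one source->destination transfer and return the FTS job id."""
    out = subprocess.check_output(
        ["glite-transfer-submit", "-s", FTS_ENDPOINT, source_surl, dest_surl],
        text=True)
    return out.strip()

def job_status(job_id: str) -> str:
    """Poll the state of a previously submitted FTS job."""
    out = subprocess.check_output(
        ["glite-transfer-status", "-s", FTS_ENDPOINT, job_id], text=True)
    return out.strip()
```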
22 Local File Catalog
File catalogues provide the mapping of Logical File Names (LFNs) to GUIDs and storage locations (SURLs). The experiments need:
- a hierarchical name space (directories)
- some form of a collection notion (datasets, fileblocks, ...)
- role-based security (admin, production, etc.)
- support for bulk operations, e.g. dumping the entire catalog
Interfaces are required to POOL, a POSIX-like I/O service, and the WMS (e.g. the Data Location Interface / Storage Index interfaces).
The LCG File Catalog fixes the performance and scalability problems seen in the EDG catalogs and provides most of the required functionality. The experiments rely on grid catalogs for locating files and datasets; experiment-dependent information is kept in experiment catalogues.
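To make the LFN -> GUID -> SURL mapping concrete, here is a toy Python model of the catalogue data structures described above; it illustrates the data model only and is not the LFC implementation.

```python
# Toy model of the catalogue mappings (LFN -> GUID -> replica SURLs).
import uuid
from collections import defaultdict

class ToyCatalogue:
    def __init__(self):
        self.lfn_to_guid = {}             # hierarchical logical file names
        self.replicas = defaultdict(set)  # guid -> set of SURLs

    def register_file(self, lfn: str, surl: str) -> str:
        """Register a replica; create the GUID on first registration."""
        guid = self.lfn_to_guid.setdefault(lfn, str(uuid.uuid4()))
        self.replicas[guid].add(surl)
        return guid

    def list_replicas(self, lfn: str):
        """Return all known storage locations for a logical file name."""
        return sorted(self.replicas.get(self.lfn_to_guid.get(lfn), ()))

cat = ToyCatalogue()
cat.register_file("/grid/atlas/raw/run001/file0001",
                  "srm://se.tier1.example/atlas/file0001")
print(cat.list_replicas("/grid/atlas/raw/run001/file0001"))
```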
23 VOBox
The VOBox is a machine where permanently running processes (agents or services) can be deployed and where the required security, logging and monitoring can be incorporated. It provides a way to deploy VO-specific upper-layer middleware at a site, with the aim of filling the gap between the existing LCG middleware and the VO needs. The VOBox is not meant to by-pass the current middleware deployment but to strengthen and enhance it to meet experiment-specific requirements.
24 Experiment Integration: LHCb Architecture for Using FTS
LHCb uses a central Data Movement model based at CERN: FTS + TransferAgent + RequestDB. The TransferAgent and RequestDB were developed for this purpose; the Transfer Agent runs on an LHCb-managed lxgate-class machine.
(Diagram: the Transfer Agent talks to the File (Replica) Catalog and the Request DB, and drives the File Transfer Service through a Transfer Manager Interface; the transfer network connects the Tier-0 SE with Tier-1 SEs A, B and C.)
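In the spirit of this model, a schematic agent loop in Python: pick pending requests from a request database, hand them to a transfer submission function, and record the outcome. All names and the schema are illustrative, not LHCb code.

```python
# Schematic transfer-agent loop: poll a request DB, submit pending
# transfers, mark them as submitted. Illustrative only.
import sqlite3
import time

def run_agent(db_path: str, submit):
    """Poll the request DB and submit pending transfers via `submit(src, dst)`."""
    db = sqlite3.connect(db_path)
    db.execute("""CREATE TABLE IF NOT EXISTS requests
                  (id INTEGER PRIMARY KEY, source TEXT, dest TEXT,
                   state TEXT DEFAULT 'Waiting', job_id TEXT)""")
    while True:
        rows = db.execute(
            "SELECT id, source, dest FROM requests WHERE state='Waiting'").fetchall()
        for req_id, src, dst in rows:
            job_id = submit(src, dst)   # e.g. an FTS submission wrapper
            db.execute("UPDATE requests SET state='Submitted', job_id=? WHERE id=?",
                       (job_id, req_id))
        db.commit()
        time.sleep(30)                  # polling interval
```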
25 ALICE Workload Management
(Diagram of the ALICE pull model.) The ALICE central services keep a Job Catalogue (jobs with their input LFNs) and a File Catalogue (LFN -> GUID -> storage elements); an Optimizer, which knows the close SEs, matchmakes jobs against sites. The VO-Box at a site submits a job agent through the LCG Resource Broker to the site CE. Once running on a worker node, the agent checks the local environment (and dies with grace if it is not OK), asks the central Task Queue for a workload, retrieves and executes it, registers the output in the ALICE catalogues, updates the Task Queue and sends the job result back.
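A schematic pull-model agent in Python, following the flow described above: check the environment, pull work from a central queue, execute, report. All names are illustrative, not AliEn code.

```python
# Schematic pull-model job agent: check env, pull work, run, report.
import random
import time

def environment_ok() -> bool:
    """Stand-in for the sanity checks an agent runs on the worker node."""
    return True

def request_workload(task_queue):
    """Ask the central task queue for a matching job (None if nothing fits)."""
    return task_queue.pop() if task_queue else None

def run_agent(task_queue):
    if not environment_ok():
        return "died with grace"                 # the slide's wording
    while (job := request_workload(task_queue)) is not None:
        time.sleep(random.random())              # pretend to execute the job
        print(f"finished {job}, result sent back to central services")
    return "queue drained"

print(run_agent(["job-1.1", "job-1.2", "job-2.1"]))
```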
26 Service Level Definition
- C (Critical): downtime 1 hour, reduced 1 hour, degraded 4 hours, availability 99%
- H (High): downtime 4 hours, reduced 6 hours, degraded 6 hours, availability 99%
- M (Medium): downtime 6 hours, reduced 6 hours, degraded 12 hours, availability 99%
- L (Low): downtime 12 hours, reduced 24 hours, degraded 48 hours, availability 98%
- U (Unmanaged): none of the above apply
Downtime defines the time between the start of the problem and restoration of service at minimal capacity (i.e. basic function but capacity < 50%). Reduced defines the time until restoration of a reduced-capacity service (> 50%). Degraded defines the time until restoration of a degraded-capacity service (> 80%). Availability defines the sum of the time that the service is down compared with the total time during the calendar period for the service; site-wide failures are not considered as part of the availability calculations.
99% means a service can be down up to 3.6 days a year in total; 98% means up to a week in total. "None" means the service is running unattended.
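The 3.6 days and one week figures follow directly from the availability percentages; a one-line check in Python:

```python
# Arithmetic behind the availability column: allowed downtime per year.
def allowed_downtime_days(availability_pct: float) -> float:
    """Maximum downtime per year, in days, for a given availability."""
    return (1 - availability_pct / 100) * 365

for pct in (99, 98):
    print(f"{pct}% availability -> up to {allowed_downtime_days(pct):.1f} days/year down")
```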
27 Example Services and Service Levels
- Resource Broker: Critical, runs at main sites
- Compute Element: High, all sites
- MyProxy: Critical
- BDII: Critical, global
- R-GMA: High
- LFC: High, all sites (ATLAS, ALICE), CERN (LHCb)
- FTS: High, T0 and T1s (except FNAL)
- SRM: Critical, all sites
This list needs to be completed and verified; then timescales for achieving the necessary service levels need to be agreed.
28 Building the WLCG Service
All services required to handle production data flows are now deployed at all Tier-1s and at the participating Tier-2s. Next steps:
- bring the remaining Tier-2 centres into the process
- get the (stable) data rates up to the target
- identify the additional use cases and functionality
- bring core services up to the robust 24x7 production standard required, using existing experience and technology: monitoring, alarms, operators, SMS to 2nd/3rd level support
- (re-)implement the required services at sites so that they can meet the MoU targets, measured through Site Functional Tests: delivered availability, maximum intervention time, etc.
The goal is to build a cohesive service out of a large distributed community.
29 Building the WLCG Service
(Charts: planned Tier-1 evolution and Tier-2 growth per year, in CPU [MSI2K], disk [PB] and tape [PB]; the numerical values did not survive transcription.)
30 Extra
31 VOBox Services: ALICE
AliEn and monitoring agents and services running on the VO node:
- AliEn Computing Element (CE): interface to the LCG RB
- Storage Element Service (SES): interface to local storage (via SRM or directly)
- File Transfer Daemon (FTD): scheduled file transfer agent (possibly using the FTS implementation)
- Cluster Monitor (CM): local queue monitoring
- MonALISA: general monitoring agent
- PackMan (PM): software distribution and management
- xrootd: application file access
- Agent Monitoring (AmOn)
32 Polish LCG/EGEE Centres
- Cracow: CYFRONET, Academic Computer Centre
- Warsaw: ICM, Interdisciplinary Centre for Mathematical and Computational Modelling
- Poznań: PCSS, Poznań Supercomputing and Networking Centre
33 Polish Network
(Map of the Polish national research network, with 10 Gb/s, 1 Gb/s and local links connecting GDAŃSK, KOSZALIN, SZCZECIN, BYDGOSZCZ, POZNAŃ, TORUŃ, OLSZTYN, BIAŁYSTOK, WARSZAWA, ŁÓDŹ, WROCŁAW, CZĘSTOCHOWA, OPOLE, KATOWICE, KIELCE, RADOM, PUŁAWY, LUBLIN, KRAKÓW, BIELSKO-BIAŁA, RZESZÓW and ZIELONA GÓRA; external connections to GÉANT, BASNET (34 Mb/s), CESNET and SANET.)
34 Polish Tier-2
Poland is a federated Tier-2; the Polish HEP LHC community numbers ~60 people. Each of the computing centres will naturally support mainly one experiment:
- Cracow: ATLAS
- Warsaw: CMS
- Poznań: ALICE
The centres are currently setting up for participation in SC3 and SC4. In the future each centre will probably provide about 1/3 of an average/small Tier-2 for a given experiment.