GridKa: Roles and Status


1 GridKa: Roles and Status. Forschungszentrum Karlsruhe GmbH, Institute for Scientific Computing, P.O. Box 3640, D-76021 Karlsruhe, Germany. Holger Marten

2 History
10/2000: First ideas about a German Regional Centre for LHC Computing; planning and cost estimates.
05/2001: Start of a BaBar Tier-B with Univ. Bochum, Dresden, Rostock.
07/2001: The German HEP communities send "Requirements for a Regional Data and Computing Centre in Germany (RDCCG)"; more planning and cost estimates.
12/2001: Launching committee establishes the RDCCG (later renamed Grid Computing Centre Karlsruhe, GridKa).
04/2002: First prototype.
10/2002: GridKa inauguration meeting.

3 High Energy Physics experiments served by GridKa (diagram): the LHC experiments (CERN) and the non-LHC experiments BaBar (SLAC, USA), CDF and D0 (FNAL, USA) and Compass (CERN), which have real data committed to Grid Computing today; other sciences later.

4 GridKa Project Organization (organigram): Overview Board, Technical Advisory Board (TAB) with representatives of Alice, Atlas, CMS, LHCb, BaBar, CDF, D0, Compass and DESY, and the GridKa Project Leader (planning, development, technical realization, operation); further entities shown: BMBF, physics committees, HEP experiments, LCG, FZK management, Head of the FZK Computing Centre, Chairman of the TAB, Project Leader.

5 German Users of GridKa: 22 institutions, 44 user groups, 350 scientists.
Aachen (4), Bielefeld (2), Bochum (2), Bonn (3), Darmstadt (1), Dortmund (1), Dresden (2), Erlangen (1), Frankfurt (1), Freiburg (2), Hamburg (1), Heidelberg (1) (6), Karlsruhe (2), Mainz (3), Mannheim (1), München (1) (5), Münster (1), Rostock (1), Siegen (1), Wuppertal (2)

6 GridKa in the network of international Tier-1 centres
France: IN2P3, Lyon
Germany: GridKa, Forschungszentrum Karlsruhe
Italy: CNAF, Bologna
Japan: ICEPP, University of Tokyo
Spain: PIC, Barcelona
Switzerland: CERN, Geneva
Taiwan: Academia Sinica, Taipei
UK: Rutherford Laboratory, Chilton
USA: Fermi Laboratory, Batavia, IL
USA: BNL
Warning: list not fixed.

7 The fifth LHC subproject: the global LHC Computing Grid (diagram). Tier-0 centre at CERN; Tier-1 centres, e.g. Germany (FZK), France (IN2P3), Italy (CNAF), UK (RAL), USA (Fermi, BNL); Tier-2 (university and laboratory computing centres); Tier-3 (institute computers); Tier-4 (desktops); virtual organizations (ATLAS, CMS, LHCb, ...) and their working groups spanning labs and universities.

8 LHC Computing Model (simplified!!) (diagram of the Tier-0, the Tier-1 centres such as FZK, CNAF, RAL, FNAL, BNL, IN2P3, PIC, TRIUMF, Taipei, and many small Tier-2 centres and desktops/portables; Les Robertson, GDB, May 2004)
Tier-0, the accelerator centre: filter raw data; reconstruction to summary data (ESD); record raw data and ESD; distribute raw data and ESD to the Tier-1s.
Tier-1: permanent storage and management of raw, ESD, calibration data, meta-data, analysis data and databases; grid-enabled data service; managed mass storage; high availability (24h x 7d); online to the data acquisition process; data-heavy analysis; re-processing raw to ESD; national and regional support; long-term commitment; resources: 50% of average.

9 Tier-2: well-managed, grid-enabled disk storage; simulation; end-user analysis, batch and interactive; high-performance parallel analysis (PROOF?).
Each Tier-2 is associated with a Tier-1 that serves as the primary data source, takes responsibility for long-term storage and management of all of the data generated at the Tier-2 (grid-enabled mass storage), and may also provide other support services (grid expertise, software distribution, maintenance, ...). CERN will not provide these services for Tier-2s except by special arrangement.
(Les Robertson, GDB, May 2004; diagram of Tier-1 and Tier-2 sites)

10 GridKa planned resources (chart): planned CPU (kSI2000), disk and tape (TByte) capacities per year, covering LCG Phase I, Phase II and Phase III.

11 Distribution of planned resources at GridKa (charts): shares of CPU, disk and tape planned for LHC vs. non-LHC experiments from 2002 to 2008; a significant contribution goes to the non-LHC experiments (Tier-A for BaBar, regional centre for CDF and D0).

12 GridKa Environment

13 (photos of the GridKa environment) IWR buildings 441 and 442; main building; tape storage.

14 Worker Nodes & Test beds
Production environment:
97 x dual PIII, 1.26 GHz (97 kSI2000), ... GB memory, 40 GB HD
... x dual PIV, 2.2 GHz (102 kSI2000), 1 GB RAM, 40 GB HD
... x dual PIV, 2.667 GHz (130 kSI2000), 1 GB RAM, 40 GB HD
... x dual PIV, 3.06 GHz (534 kSI2000), 1 GB RAM, 40/80 GB HD
... x dual Opteron (... kSI2000), 2 GB RAM, 80 GB HD
Σ 536 nodes, 1072 CPUs, 953 kSI2000, installed with RH 7.3 and LCG (except for the Opterons).
Test environment: an additional 30 machines in several test beds.
Next OS: Scientific Linux, if middleware and applications are ready.

15 PBSPro fair share according to requirements (table, 1 Oct 2004): kSI2000 share and percentage per experiment (Alice, Atlas, CMS, LHCb, BaBar, CDF, D0, Compass); in total ... % LHC, 55 % non-LHC. The default (test) queue is not handled by the fair share; these CPUs are kept free for test jobs.
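As an illustration of how such a fair-share table can be put together, here is a minimal Python sketch that derives percentage shares from per-experiment kSI2000 requirements; the numbers in it are invented for the example and are not GridKa's actual October 2004 values, and PBSPro itself would be fed the resulting targets through its own fairshare configuration.

```python
# Sketch: derive fair-share percentages from per-experiment kSI2000 requirements.
# The experiment names match the GridKa slide; the kSI2000 numbers are made up
# for illustration and are NOT the actual 1-Oct-2004 values.
requirements_ksi2000 = {
    "Alice": 60, "Atlas": 80, "CMS": 70, "LHCb": 50,      # LHC
    "BaBar": 120, "CDF": 90, "D0": 90, "Compass": 30,     # non-LHC
}

total = sum(requirements_ksi2000.values())
lhc = {"Alice", "Atlas", "CMS", "LHCb"}

for exp, ksi in requirements_ksi2000.items():
    print(f"{exp:8s} {ksi:5d} kSI2000  {100 * ksi / total:5.1f} %")

lhc_share = 100 * sum(k for e, k in requirements_ksi2000.items() if e in lhc) / total
print(f"LHC: {lhc_share:.1f} %   non-LHC: {100 - lhc_share:.1f} %")
```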

16 Disk space available for HEP experiments: 202 TB (Oct 2004: ... % LHC, 71 % non-LHC). (bar chart, 0 to 40 TByte per experiment: ALICE, ATLAS, CMS, LHCb, BaBar, CDF, D0, Compass)

17 Online Storage I
About 40 TB stored in NAS (better: DAS): dual-CPU boxes with 16 EIDE disks and a 3Ware controller.
Experience: the hardware is cheap but not very reliable; RAID software & management messages are not always useful; good throughput for a few simultaneous jobs, but it doesn't scale to a few hundred simultaneous file accesses.
Workarounds: disk mirroring; management software ("managed disks": file copies on multiple boxes); more reliable disks + a parallel file system.

18 Online Storage: I/O design with NAS (DAS) (diagram): compute nodes access the NAS/DAS boxes (one or more per experiment, e.g. Alice, Atlas) via TCP/IP/NFS; capacity is expanded by adding boxes. Bottlenecks: ~30 MB/s read/write per box and the disk access itself.
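A rough back-of-the-envelope estimate, taking the ~30 MB/s per-box figure from the slide and purely illustrative job counts, shows why a handful of NAS boxes cannot serve a few hundred simultaneous file accesses:

```python
# Rough estimate of per-job bandwidth on a single NAS/DAS box.
# 30 MB/s is the r/w bottleneck quoted on the slide; the job counts are
# illustrative assumptions, not measurements.
box_throughput_mb_s = 30.0

for jobs in (2, 10, 100, 300):
    per_job = box_throughput_mb_s / jobs          # ideal fair sharing, ignoring seek overhead
    print(f"{jobs:3d} concurrent jobs -> ~{per_job:6.2f} MB/s per job")
# In practice hundreds of competing streams also turn sequential reads into
# random disk access, so the real per-job rate drops even further.
```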

19 Online Storage II
About 160 TB stored in a SAN: 10k-rpm SCSI disks with redundant controllers; a parallel file system on a file server cluster; exported via NFS from the file server cluster to the worker nodes.

20 Online Storage: scalable I/O design (diagram): compute nodes connect via TCP/IP/NFS to a file server cluster, which accesses RAID-5 storage over SAN/SCSI and Fibre Channel (per-experiment areas, e.g. Alice, Atlas); expansion by adding servers and storage. Striping + parallel file system; ... MB/s I/O measured.

21 Online Storage II (continued; about 160 TB in a SAN, see slide 19)
Advantages: high availability through multiple redundant servers; load balancing via an automounter program map.
Experience: many teething problems (bugs, learning how to configure, ...); CPU/wall-clock ratio near 1 in some applications; more expensive -> next try: cheaper S-ATA systems.
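The automounter program map mentioned above can be implemented as an executable map: autofs calls the script with the requested key and reads a single map entry from its output. The sketch below is only an illustration of that mechanism; the server names, export paths and mount options are invented, not GridKa's actual configuration.

```python
#!/usr/bin/env python
# Sketch of an autofs executable (program) map for simple load balancing.
# autofs invokes the script with the lookup key (e.g. the experiment name)
# as argv[1] and expects a single map entry on stdout.
# Server names and export paths below are made-up examples.
import random
import sys

SERVERS = ["fs01", "fs02", "fs03"]          # NFS file server cluster (assumed names)
EXPORT_BASE = "/export/data"                # assumed export path on each server

def main() -> int:
    if len(sys.argv) < 2:
        return 1
    key = sys.argv[1]                       # e.g. "alice", "atlas", ...
    server = random.choice(SERVERS)         # naive load balancing across servers
    # Print an autofs map entry: "[options] server:/path"
    print(f"-rw,hard,intr {server}:{EXPORT_BASE}/{key}")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```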

22 Why tell you all this? Because we need your experience and feedback as users!

23 Tape space available for HEP experiments: 374 TB (Oct 2004: ... % LHC, 73 % non-LHC). (bar chart, 0 to 80 TByte per experiment: ALICE, ATLAS, CMS, LHCb, BaBar, CDF, D0, Compass)

24 Tape Storage
Tape library: IBM 3584 LTO Ultrium, 8 LTO-1 drives and 4 LTO-2 drives; ... TB native (uncompressed).
Tivoli Storage Manager (TSM) for backup and archive.
Installation of dCache in progress: tape backend interfaced to Tivoli Storage Manager; installation with 1 head node and 3 pool nodes, currently tested by CMS & CDF.
Other: SAM station caches for D0 and CDF; JIM (Job Information Management) station for D0; tape connection via scripts (D0); CORBA naming service (for CDF).
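As a hedged illustration of what a script-based tape connection can look like, the following Python sketch wraps the standard TSM command-line client (dsmc archive); the call pattern is generic TSM usage, while the error handling and paths are assumptions rather than GridKa's actual D0 scripts.

```python
# Minimal sketch of a script-based tape connection using the TSM client.
# Assumes the standard Tivoli Storage Manager command-line client `dsmc`
# is installed and configured; paths and error handling are illustrative only.
import subprocess
import sys

def archive_to_tape(path: str) -> None:
    """Send one file to the TSM archive pool (and thus to tape)."""
    result = subprocess.run(["dsmc", "archive", path],
                            capture_output=True, text=True)
    if result.returncode != 0:
        # dsmc reports problems on stdout/stderr; surface them to the caller
        raise RuntimeError(f"dsmc archive failed for {path}:\n{result.stdout}{result.stderr}")

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        archive_to_tape(filename)
        print(f"archived {filename}")
```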

25 GridKa plan for WAN connectivity (timeline chart): 34 Mbps, 155 Mbps, 2 Gbps, 10 Gbps, 20 Gbps; start 10 Gbps tests; start discussion with Dante!
Sept 2004: DFN upgraded the capacity from Karlsruhe to Géant to 10 Gbps; tests have been started!
Routing (full 10 Gbps): GridKa -> DFN (Karlsruhe) -> DFN (Frankfurt) -> Géant (Frankfurt) -> Géant (Milano) -> Géant (Geneva) -> CERN.
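To put these link capacities into perspective, a short calculation of how long a given data volume would occupy each link (the 10 TB volume is an arbitrary example and protocol overhead is ignored):

```python
# How long does it take to move a given data volume over each WAN link?
# Link speeds are the ones on the slide; the 10 TB volume is an arbitrary
# example and protocol/latency overheads are ignored.
volume_tb = 10
volume_bits = volume_tb * 1e12 * 8

links_bps = {"34 Mbps": 34e6, "155 Mbps": 155e6, "2 Gbps": 2e9,
             "10 Gbps": 10e9, "20 Gbps": 20e9}

for name, bps in links_bps.items():
    hours = volume_bits / bps / 3600
    print(f"{name:>9s}: {hours:8.1f} h for {volume_tb} TB")
```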

26 Further services & sources of information

27 GGUS (Global Grid User Support)

28 User information
GridKa Info: user registration; globus installation; batch system PBS; backup & archive; getting a certificate from the GermanGrid CA; listserver / mailing lists; monitoring status with Ganglia.
HEP experiments: experiment-specific information.
FAQ, documentation, ...
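For the batch system, a minimal job submission from Python might look like the sketch below; the queue name "test" is only a guess based on the default (test) queue mentioned earlier, and the PBS directives are generic examples rather than GridKa-specific settings.

```python
# Sketch: submitting a small test job to the PBS batch system from Python.
# The queue name "test" is an assumption based on the "default (test) queue"
# mentioned on slide 15; real GridKa queue names may differ.
import subprocess
import tempfile

JOB_SCRIPT = """#!/bin/sh
#PBS -N hello_gridka
#PBS -l walltime=00:05:00
echo "Running on $(hostname)"
"""

with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write(JOB_SCRIPT)
    script_path = f.name

# qsub prints the job identifier on success
job_id = subprocess.run(["qsub", "-q", "test", script_path],
                        capture_output=True, text=True, check=True).stdout.strip()
print(f"submitted job {job_id}")
```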

29 Tools (screenshot): Ganglia cluster monitoring at gridmon.fzk.de/ganglia
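For scripted access to the same monitoring data, Ganglia's gmond daemon publishes its cluster state as XML on a plain TCP socket (port 8649 by default). The sketch below assumes that such a port on gridmon.fzk.de is reachable, which may well not be the case from outside the centre; the web frontend remains the supported interface.

```python
# Sketch: pull the Ganglia XML cluster report from a gmond daemon.
# gmond serves its full state as XML on TCP port 8649 by default.
# The host name is an assumption; the public web frontend is the supported
# way to view GridKa monitoring.
import socket
import xml.etree.ElementTree as ET

def fetch_ganglia_xml(host: str, port: int = 8649) -> ET.Element:
    chunks = []
    with socket.create_connection((host, port), timeout=10) as sock:
        while True:
            data = sock.recv(65536)
            if not data:
                break
            chunks.append(data)
    return ET.fromstring(b"".join(chunks))

if __name__ == "__main__":
    root = fetch_ganglia_xml("gridmon.fzk.de")        # assumed host, may be firewalled
    for host in root.iter("HOST"):
        load = host.find("METRIC[@NAME='load_one']")
        value = load.get("VAL") if load is not None else "n/a"
        print(host.get("NAME"), "load_one =", value)
```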

30 Final remarks

31 Europe on the way to e-science
EU project EGEE, April 2004 to March 2006: ... million Euro for personnel; 70 partner institutes in 27 countries, organized in 9 federations (map of partner countries); applications: LHC grid, Biomed, ...
Goal: "Provide distributed European research communities with a common market of computing, offering round-the-clock access to major computing resources, independent of geographic location, ..."

32 Status of LCG / EGEE

33 Last but not least
We want to help: our users on our systems; support/discuss cluster installations at other institutes; support/discuss middleware installations at other centres; creating a German Grid infrastructure; and ...
We will continue the balancing act between testing & data challenges and production with real data.

34 No equipment without people. Thanks! We appreciate the continuous interest and support by the Federal Ministry of Education and Research, BMBF.
