GRID Initiatives in the Spanish Academic Network (Red Académica Española)

1 GRID Initiatives in the Spanish Academic Network. GT RedIRIS 2002. Jesús Marco, CSIC

2 GRID Initiatives. High Energy Physics: the challenge of the next accelerator, the LHC. EU-DataGrid (IFAE, testbed); CCLHC-ES; LCG (CERN, Spanish participation); LCG-ES; DataTag; CrossGrid: interactive applications, testbed, companies; EoI for the 6th Framework Programme.

3 The Challenge of LHC Computing (ATLAS, CMS, LHCb). Storage: raw recording rate of GBytes/sec, accumulating at 5-8 PetaBytes/year, 10 PetaBytes of disk. Processing: the equivalent of 200,000 of today's fastest PCs.

4 The Challenge of LHC Computing. Researchers spread all over the world! Europe: 267 institutes, 4603 users. Elsewhere: 208 institutes, 1632 users.

5 The DataGRID project. Supported by the EU Fifth Framework Programme. Principal goal: collaborate with and complement other European and US projects. Project objectives: middleware for fabric & Grid management; a large-scale testbed; production-quality demonstrations; three-year phased developments & demos; open source and communication (Global GRID Forum, Industry and Research Forum). Main partners: CERN, INFN (I), CNRS (F), PPARC (UK), NIKHEF (NL), ESA-Earth Observation. Other sciences: KNMI (NL), biology, medicine. Industrial participation: CS SI (F), DataMat (I), IBM (UK). Associated partners: Czech Republic, Finland, Germany, Hungary, Spain, Sweden (mostly computer scientists). Industry and Research Project Forum with representatives from Denmark, Greece, Israel, Japan, Norway, Poland, Portugal, Russia, Switzerland. Collaboration with similar US GRID initiatives.

6 [EDG job submission diagram:] the User Interface (UI) submits a job described in JDL, with its Input Sandbox, to the Resource Broker; the broker consults the Information Service and the Replica Catalogue, and passes the job through the Job Submission Service (with BrokerInfo) to a Compute Element close to a Storage Element; job events and status go to Logging & Bookkeeping, and the Output Sandbox is returned to the user.
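The submission chain above starts from a job description written in JDL. A minimal sketch of what such a description might look like (the file names and values are illustrative, not taken from the source):

```
Executable    = "analysis.sh";
StdOutput     = "analysis.out";
StdError      = "analysis.err";
InputSandbox  = {"analysis.sh", "cuts.dat"};
OutputSandbox = {"analysis.out", "analysis.err", "histograms.root"};
```

The Input Sandbox files travel with the job to the Compute Element; the Output Sandbox files are shipped back when the user retrieves the finished job.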

7 Spanish Participation in DataGRID WP6 (TESTBED). 2001: IFAE reports on behalf of the other HEP institutions working in the testbed workpackage of the DataGrid project in Spain (IFCA, CIEMAT, UAM, IFIC): Certification Authority, installation kits, information servers (GIIS), Condor batch system and AFS, DataGrid project web sites and mailing lists for Spain.
Institution / contact / role / funded manpower:
IFAE, A.Pacheco: testbed site coordination; R.Escribá
CIEMAT, N.Colino: testbed site, CMS grid contact; F.J.Calonge
IFCA, R.Marco: testbed site, top GIIS for Spain, Certification Authority; O.Ponce
IFIC, J.Salt: testbed site, ATLAS grid contact; S.González

8 The CERN LHC Computing Grid project. After the CERN Hoffmann review (2000): resource implications presented to the LHC experiment RRBs in March; CERN Management summary presented to the SPC and the Committee of Council in March as a white paper; discussions between CERN Management and the LHC experiment spokespersons; LHC turn-on schedule agreed between machine and experiments; CERN/2379 green paper for Council and Finance Committee in June. Development and deployment of the LHC Computing Grid infrastructure should be set up and managed as a unified project, similar in some ways to a detector collaboration; CERN is viewed as the institution that should coordinate it. There should be a Prototyping Phase. The scale and complexity of the development is large and must be approached using a project structure; work is needed in the Member State institutes and at CERN. Human and material resources for CERN's part of Phase I are not sufficient and should be funded by additional contributions from interested Member States. AGREED! The Spanish contribution includes fellowships at CERN.

9 Spain, 2001: Acción Especial for local infrastructure. Objective: an initial seed for LHC computing at each site (trained personnel, startup hardware) to trigger participation in the CERN LHC GRID Computing project (IT & collaborations), collaboration software, and GRID projects.

10 LCG-ES: multi-year project coordinated by Manuel Delfino (PIC). Work areas: EAD = Analysis Farm; EDS = SW Dev Platform; RSG = SW Repository; GSW = SW Gridification; MCF = MC Fabric; GVM = Virtual MC Farm; ETD = Data Transform; PIC = Gridified Data Store; SEG = Security Architect; CTS = Tech MC Support; CDC = Data Challenge Coordination. Deliverables to fulfil the objectives: USC: EAD, GSW, CDC; UAM: EAD, MCF, GSW, CDC; IFCA: EAD, MCF, EDS, RSG, SEG, CTS; CIEMAT: EAD, GVM; IFAE: EAD, ETD, PIC, EDS, GSW; UB: EAD, EDS, CTS; IFIC: EAD, MCF, CTS, CDC. Stay away from glitz; concentrate on deployment, MC & analysis; use local universities for TT to other disciplines. 600 KCHF materials contribution to LCG-CERN.

11 The CROSSGRID project. European project (Cross Action CPA9, 6th IST call, 5th Framework Programme) [5 M€]. Objectives: extending the GRID across Europe: testbed (WP4); interactive applications (WP1) in health care (vascular surgery), environment (air pollution, meteorology, flooding...), and HEP (interactive data analysis). Partners: Poland (coordinator, M.Turala), Germany (FZK), Holland, Portugal, Greece... (13 countries, 21 institutions). Industry: Datamat (I), Algosystems (Gr). Spain: CSIC (IFCA, IFIC, RedIRIS), UAB, USC/CESGA, participating in applications (environment, HEP), performance and monitoring, resource management, and testbed (CSIC WP leader). Started 1st March 2002; Q1 deliverables released! (including all SRS and testbed planning).

12 CrossGrid WP1 Task 1.3: Distributed Data Analysis in HEP, coordinated by C.Martinez (CSIC). Subtask 1.3.2: data-mining techniques on the GRID. ANN (Artificial Neural Networks) are the main tool for data mining in HEP. Example of a physics analysis using ANN.

13 HEP Interactive Application [architecture diagram]: the user works through a Portal, authenticating against a CAS service with authorization; the Resource Broker schedules an interactive session and locates the DATASET via the Replica Manager; an Interactive Session Manager drives Interactive Session Workers for distributed processing against a database server (with DB installation on demand), exchanging XML input and XML output.

14 Storage Element as a WebService? David Rodriguez, CSIC. Current SE in EDG: a GridFTP server. WebService approach: Passive SE: GridFTP, or /grid, etc... Active SE: SQL QUERY (ResultSet in XML) = SELECT ... FROM ... (three-tier: servlet running, like Spitfire), ready! (IBM IDS); ROOT query (does this make sense? PAW query does make sense, implemented...); PROCESSING QUERY (= agent): stored procedure or XML description (SOAP-like?). An SQL QUERY is enough for NN in HEP; a PROCESSING QUERY (agent-like approach) is likely needed for SOM.
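The "active SE" idea above — ship the query to the data and return the result set as XML — can be sketched in a few lines. This is a toy stand-in using SQLite; the table, column names, and cut are hypothetical, not from the source:

```python
import sqlite3
import xml.etree.ElementTree as ET

def active_se_query(conn, sql):
    """Run a read-only SQL query against the storage element's
    database and wrap the result set in XML, in the spirit of
    the 'active SE' described above."""
    cursor = conn.execute(sql)
    columns = [d[0] for d in cursor.description]
    root = ET.Element("ResultSet")
    for row in cursor:
        rec = ET.SubElement(root, "row")
        for name, value in zip(columns, row):
            ET.SubElement(rec, name).text = str(value)
    return ET.tostring(root, encoding="unicode")

# Hypothetical event catalogue standing in for the SE's back end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, energy REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)", [(1, 45.2), (2, 91.1)])
xml_out = active_se_query(conn, "SELECT id, energy FROM events WHERE energy > 50")
print(xml_out)
```

A real three-tier service (as in Spitfire) would put this behind a servlet and an HTTP/SOAP front end; the sketch only shows the query-to-XML step.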

15 HEP Interactive Portal: V.O. authentication; DATASET resources monitoring; DATASET dictionary (classes): basic objects, derived procedures; graphic output/(input?); analysis scripts; alphanumeric output; work persistency.

16 Distributed (via MPI) NN training scaling. [Chart: distributed NN performance, total training time in seconds vs. number of computing nodes; sample with 16 variables, 1000 epochs for training.] First checks with nodes at Santander & RedIRIS (Oscar Ponce & Antonio Fuentes): for the remote configuration, modelling shows that a latency below 100 ms is needed!

17 SOM Application for Data Mining: adaptive competitive learning. Downscaling weather forecasts: sub-grid details escape from numerical models!

18 Atmospheric Pattern Recognition. Prototypes for a trained SOM: close units in the lattice are associated with similar atmospheric patterns. [Panels: T at 1000 mb; T at 500 mb; Z, U, V at 500 mb.]
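The "close units in the lattice map to similar patterns" property comes from the SOM's neighbourhood update. A minimal 1-D SOM sketch on toy 2-D data (grid size, data, and schedule are all illustrative, far smaller than a real atmospheric-pattern SOM):

```python
import math
import random

def train_som(data, grid=4, dim=2, epochs=500, seed=1):
    """Minimal 1-D self-organising map: competitive learning where the
    best-matching unit and its lattice neighbours move towards each
    sample, with learning rate and neighbourhood shrinking over time."""
    rng = random.Random(seed)
    units = [[rng.random() for _ in range(dim)] for _ in range(grid)]
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)                      # decaying learning rate
        radius = max(1.0, (grid / 2) * (1 - t / epochs))  # shrinking neighbourhood
        x = rng.choice(data)
        # competitive step: find the best-matching unit (BMU)
        bmu = min(range(grid),
                  key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
        for i, unit in enumerate(units):
            h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))  # lattice closeness
            for d in range(dim):
                unit[d] += lr * h * (x[d] - unit[d])
    return units

# Two toy "atmospheric patterns": well-separated clusters in 2-D
data = [(0.10, 0.10), (0.12, 0.08), (0.90, 0.90), (0.88, 0.92)]
units = train_som(data)
```

After training, units at opposite ends of the lattice settle near the two clusters, which is exactly the prototype behaviour the slide describes.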

19 CrossGrid Architecture (OGSA in mind). [Layer diagram:]
Applications: 1.1 BioMed, 1.2 Flooding, 1.3 Interactive Distributed Data Access, 1.3 Data Mining on Grid (NN), 1.4 Meteo Pollution.
Supporting Tools: 2.2 MPI Verification, 2.3 Metrics and Benchmarks, 2.4 Performance Analysis, 3.1 Portal & Migrating Desktop.
Applications Development Support: 1.1 Grid Visualisation Kernel; MPICH-G (1.1, 1.2); HLA and others.
Grid Common Services: 3.2 Scheduling Agents, DataGrid Job Manager, 1.1 User Interaction Services, Roaming Access, Data Collection, DataGrid Replica Manager, 3.4 Replica Manager / Optimization of Grid Data Access, 3.3 Grid Monitoring; Globus: GRAM, Replica Catalog, GSI, Globus-IO, GIS/MDS, GridFTP.
Local Resources: Resource Manager (SE) with secondary and tertiary storage (3.4 Optimization of Local Data Access); Resource Manager (CE) with CPUs; 1.1, 1.2 Resource Managers for scientific instruments (medical scanners, satellites, radars); 1.1 Resource Manager for VR systems (caves, immersive desks); 1.1 Resource Manager for visualization tools.

20 CrossGrid WP4 - International Testbed Organisation. Objectives: testing and validation for applications, the programming environment, and new services & tools; emphasis on collaboration with DATAGRID, plus extension to DATATAG; extension of the GRID across Europe.

21 CROSSGRID testbed sites: TCD Dublin, PSNC Poznan, USC Santiago, CSIC IFCA Santander, UvA Amsterdam, FZK Karlsruhe, ICM & IPJ Warsaw, CYFRONET Cracow, II SAS Bratislava, LIP Lisbon, CSIC RedIRIS Madrid, UAB Barcelona, CSIC IFIC Valencia, AUTh Thessaloniki, DEMO Athens, UCY Nicosia.

22 CrossGrid WP4 - International Testbed Organisation. Tasks in WP4:
4.0 Coordination and management (task leader: J.Marco, CSIC, Santander): coordination with WP1, WP2, WP3; collaborative tools (web + videoconference + repository); Integration Team.
4.1 Testbed setup & incremental evolution (task leader: R.Marco, CSIC, Santander): define installation, deploy testbed releases, certificates; Security Working Group: A.Fuentes, RedIRIS.
Testbed site responsibles: CYFRONET (Krakow) A.Ozieblo; ICM (Warsaw) W.Wislicki; IPJ (Warsaw) K.Nawrocki; UvA (Amsterdam) D.van Albada; FZK (Karlsruhe) M.Hardt; IISAS (Bratislava) J.Astalos; PSNC (Poznan) P.Wolniewicz; UCY (Cyprus) G.Tsouloupas; TCD (Dublin) B.Coghlan; CSIC (Santander/Valencia) J.Sanchez; UAB (Barcelona) E.Heymann; USC/CESGA (Santiago) C.Fernandez; Demo (Athens) Y.Cotronis; AuTh (Thessaloniki) C.Kanellopoulos; LIP (Lisbon) J.Martins.

23 CrossGrid WP4 - International Testbed Organisation. Tasks in WP4: 4.2 Integration with DATAGRID (task leader: M.Kunze, FZK): coordination of testbed setup, knowledge exchange, participation in WP meetings. 4.3 Infrastructure support (task leader: J.Salt, CSIC, Valencia): fabric management, HelpDesk, installation kit, network support: QoS (working group, I.Lopez, CESGA). 4.4 Verification & quality control (task leader: J.Gomes, LIP): feedback, improving the stability of the testbed. JOINING the DataGrid testbed 1.2 in July 2002.

24 Hands-on: IFCA (http://grid.ifca.unican.es/)

25 IFCA Research Institute: University of Cantabria + Consejo Superior de Investigaciones Científicas. Three main research lines: astrophysics (XMM, Planck...); statistical physics (lasers, fractals & chaos...); High Energy Physics: DELPHI, LEP (physics analysis); CDF, Fermilab (TOF detector & physics analysis); CMS, LHC (alignment & Geant4 simulation, OSCAR). Common interest: computing needs: data management, advanced analysis techniques, optimizing resources for infrastructure & manpower.

26 HEP Computing at IFCA. Previous experience: DELPHI fast simulation; RPC software for DELPHI on-line; analysis software for DELPHI (NN, IES...). Initiatives: databases (use of O/R DBMS in HEP); FEDER project with a DB software company (Semicrol). GRID initiatives: DataGRID: testbed site & CA for Spain; CROSSGRID: WP1 (HEP application, meteo), WP2, WP4 (testbeds); technology transfer with companies (Mundivia, CIC); participation in the DataTag testbed (CDF); computing for LHC (CMS).

27 GRID team in Santander. Research line at IFCA (Univ. Cantabria + CSIC): staff + contracts + fellowships. Expertise: database use; testbed issues (cluster installation, security, CA, etc.); applications: astrophysics, complex systems, HEP, meteo. Collaboration and support (via projects) on NN methods: Dpto. Matematicas; clusters & MPI: Grupo de Arquitectura de Computadores; network: Centro de Calculo U.C.; companies: Mundivia, CIC-SL, Semicrol.

28 Resources. New IFCA building with support for e-science activities (2002/2003). New infrastructure: cluster of ~100 IBM servers (100% available for GRID) (dual 1.26 GHz, 640 MB-4 GB RAM, 80 GB/server) + a 4-way processor gatekeeper; Gigabit local backbone; improved network connection: 155 (?) Mbps Santander-RedIRIS (Géant node).

29 Cluster nodes: 72 Computing Elements / Worker Nodes, 8 Storage Elements. IBM xSeries: CPU 1.26 GHz; 128 MB + 512 MB SDRAM; hard disks: 30 GB SCSI, 60 GB IDE; network: 100 Mbps; CD-ROM, floppy. Next updates: 8 network cards at 1000 Mbps (for Storage Elements, ...); join 1.26 GHz CPUs in a dual setup; buy new >=1.4 GHz CPUs; two machines with 4 GB SDRAM for tests.

30 Remote Automatic Installation. Nodes configured for PXE boot. Installation server: DHCP, NFS, TFTP; 1 server for LCFG, 1 server for PXE-Linux + Kickstart. Help sources: PXE-Linux (from SYSLINUX), "HOWTO: Install Red Hat Linux via PXE and Kickstart".
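The PXE chain above hinges on the DHCP server pointing booting nodes at the TFTP server that holds the PXE-Linux bootloader. A sketch of the relevant ISC dhcpd.conf stanza (all addresses are placeholders, not from the source):

```
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  next-server 192.168.1.10;     # TFTP server holding pxelinux.0
  filename "pxelinux.0";        # PXE-Linux bootloader (from SYSLINUX)
}
```

PXE-Linux then fetches its configuration over TFTP and can chain into a Kickstart-driven Red Hat Linux install served over NFS.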

31 A new IST Grid project space (Kyriakos Baxevanidis). [Diagram: projects GRIA, EGSO, CROSSGRID, GRIP, EUROGRID, GRIDLAB, DATAGRID, DAMIEN, DATATAG arranged along the axes applications / middleware & tools / underlying infrastructures, and industry-business / science; links with European national efforts and with US projects (GriPhyN, PPDG, iVDGL, ...).]

32 EoI, 6th FP (7 June 2002). Integrated Project EGEE (coordinated by CERN). CSIC: RedIRIS, IFCA (Santander), IFIC (Valencia), IMEDEA (Palma), CAB (Madrid), CNB (Madrid), CBM (?) (Madrid), IAA (Granada). Centres: CIEMAT (Madrid), IFAE (Barcelona), PIC (Barcelona), CESGA (Santiago), IAC (Tenerife). Universities: U. Cantabria, U. Valencia, U. Murcia, U.A. Barcelona, U.A. Madrid, U. Complutense Madrid. SMEs: CIC-S.L. (Cantabria), GridSystems (Palma).

33 EoI, 6th FP (7 June 2002). Network of Excellence RTGRID (Real Time GRIDs). Spain: CSIC, Univ. Cantabria, CESGA, CIC-SL. Poland: Cyfronet. Greece: Univ. of Athens, Univ. of Thessaloniki. Slovakia: IISAS Bratislava. Cyprus: Univ. of Cyprus. Other proposals: CEPBA, UPV?...

34 In perspective. GRIDs will help with organizational and large-scale issues (metacomputing). Web Services are commercial; OGSA could be the way if performance is OK. An interactive Grid will be hard without QoS on networks. Several GRID projects with Spanish participation are progressing well. Need for organization in Spain: a Thematic Network plus teams to organize the work; e-science centres to provide local support, administrative organization, dissemination, and exploitation (we need companies involved).

Introduction to Grid computing

Introduction to Grid computing Introduction to Grid computing The INFNGrid Project Team Introduction This tutorial has been implemented considering as starting point the DataGrid (EDG) tutorial Many thanks to the EDG tutorials team!

More information

The CMS analysis chain in a distributed environment

The CMS analysis chain in a distributed environment The CMS analysis chain in a distributed environment on behalf of the CMS collaboration DESY, Zeuthen,, Germany 22 nd 27 th May, 2005 1 The CMS experiment 2 The CMS Computing Model (1) The CMS collaboration

More information

Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft. Global Grid User Support - GGUS - within the LCG & EGEE environment

Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft. Global Grid User Support - GGUS - within the LCG & EGEE environment Global Grid User Support - GGUS - within the LCG & EGEE environment Abstract: For very large projects like the LHC Computing Grid Project (LCG) involving some 8,000 scientists from universities and laboratories

More information

MIGRATING DESKTOP AND ROAMING ACCESS. Migrating Desktop and Roaming Access Whitepaper

MIGRATING DESKTOP AND ROAMING ACCESS. Migrating Desktop and Roaming Access Whitepaper Migrating Desktop and Roaming Access Whitepaper Poznan Supercomputing and Networking Center Noskowskiego 12/14 61-704 Poznan, POLAND 2004, April white-paper-md-ras.doc 1/11 1 Product overview In this whitepaper

More information

Overview of HEP. in Spain

Overview of HEP. in Spain Overview of HEP in Spain Antonio Ferrer (IFIC -- Valencia University; CSIC) Chairman, Particle Physics & Large Accelerators National Program Research Institutions in Spain Ministry of Education Ministry

More information

Big Data and Storage Management at the Large Hadron Collider

Big Data and Storage Management at the Large Hadron Collider Big Data and Storage Management at the Large Hadron Collider Dirk Duellmann CERN IT, Data & Storage Services Accelerating Science and Innovation CERN was founded 1954: 12 European States Science for Peace!

More information

Roberto Barbera. Centralized bookkeeping and monitoring in ALICE

Roberto Barbera. Centralized bookkeeping and monitoring in ALICE Centralized bookkeeping and monitoring in ALICE CHEP INFN 2000, GRID 10.02.2000 WP6, 24.07.2001 Roberto 1 Barbera ALICE and the GRID Phase I: AliRoot production The GRID Powered by ROOT 2 How did we get

More information

The GENIUS Grid Portal

The GENIUS Grid Portal The GENIUS Grid Portal (*) work in collaboration with A. Falzone and A. Rodolico EGEE NA4 Workshop, Paris, 18.12.2003 CHEP 2000, 10.02.2000 Outline Introduction Grid portal architecture and requirements

More information

Tier-1 Services for Tier-2 Regional Centres

Tier-1 Services for Tier-2 Regional Centres Tier-1 Services for Tier-2 Regional Centres The LHC Computing MoU is currently being elaborated by a dedicated Task Force. This will cover at least the services that Tier-0 (T0) and Tier-1 centres (T1)

More information

Global Grid User Support - GGUS - in the LCG & EGEE environment

Global Grid User Support - GGUS - in the LCG & EGEE environment Global Grid User Support - GGUS - in the LCG & EGEE environment Torsten Antoni (torsten.antoni@iwr.fzk.de) Why Support? New support groups Network layer Resource centers CIC / GOC / etc. more to come New

More information

Software, Computing and Analysis Models at CDF and D0

Software, Computing and Analysis Models at CDF and D0 Software, Computing and Analysis Models at CDF and D0 Donatella Lucchesi CDF experiment INFN-Padova Outline Introduction CDF and D0 Computing Model GRID Migration Summary III Workshop Italiano sulla fisica

More information

EDG Project: Database Management Services

EDG Project: Database Management Services EDG Project: Database Management Services Leanne Guy for the EDG Data Management Work Package EDG::WP2 Leanne.Guy@cern.ch http://cern.ch/leanne 17 April 2002 DAI Workshop Presentation 1 Information in

More information

Integrating a heterogeneous and shared Linux cluster into grids

Integrating a heterogeneous and shared Linux cluster into grids Integrating a heterogeneous and shared Linux cluster into grids 1,2 1 1,2 1 V. Büge, U. Felzmann, C. Jung, U. Kerzel, 1 1 1 M. Kreps, G. Quast, A. Vest 1 2 DPG Frühjahrstagung March 28 31, 2006 Dortmund

More information

Bob Jones Technical Director bob.jones@cern.ch

Bob Jones Technical Director bob.jones@cern.ch Bob Jones Technical Director bob.jones@cern.ch CERN - August 2003 EGEE is proposed as a project to be funded by the European Union under contract IST-2003-508833 EGEE Goal & Strategy Goal: Create a wide

More information

The Grid-it: the Italian Grid Production infrastructure

The Grid-it: the Italian Grid Production infrastructure n 1 Maria Cristina Vistoli INFN CNAF, Bologna Italy The Grid-it: the Italian Grid Production infrastructure INFN-Grid goals!promote computational grid technologies research & development: Middleware and

More information

SPACI & EGEE LCG on IA64

SPACI & EGEE LCG on IA64 SPACI & EGEE LCG on IA64 Dr. Sandro Fiore, University of Lecce and SPACI December 13 th 2005 www.eu-egee.org Outline EGEE Production Grid SPACI Activity Status of the LCG on IA64 SPACI & EGEE Farm Configuration

More information

The dcache Storage Element

The dcache Storage Element 16. Juni 2008 Hamburg The dcache Storage Element and it's role in the LHC era for the dcache team Topics for today Storage elements (SEs) in the grid Introduction to the dcache SE Usage of dcache in LCG

More information

Service Challenge Tests of the LCG Grid

Service Challenge Tests of the LCG Grid Service Challenge Tests of the LCG Grid Andrzej Olszewski Institute of Nuclear Physics PAN Kraków, Poland Cracow 05 Grid Workshop 22 nd Nov 2005 The materials used in this presentation come from many sources

More information

EUFORIA: Grid and High Performance Computing at the Service of Fusion Modelling

EUFORIA: Grid and High Performance Computing at the Service of Fusion Modelling EUFORIA: Grid and High Performance Computing at the Service of Fusion Modelling Miguel Cárdenas-Montes on behalf of Euforia collaboration Ibergrid 2008 May 12 th 2008 Porto Outline Project Objectives Members

More information

Distributed Computing for CEPC. YAN Tian On Behalf of Distributed Computing Group, CC, IHEP for 4 th CEPC Collaboration Meeting, Sep.

Distributed Computing for CEPC. YAN Tian On Behalf of Distributed Computing Group, CC, IHEP for 4 th CEPC Collaboration Meeting, Sep. Distributed Computing for CEPC YAN Tian On Behalf of Distributed Computing Group, CC, IHEP for 4 th CEPC Collaboration Meeting, Sep. 12-13, 2014 1 Outline Introduction Experience of BES-DIRAC Distributed

More information

Cluster, Grid, Cloud Concepts

Cluster, Grid, Cloud Concepts Cluster, Grid, Cloud Concepts Kalaiselvan.K Contents Section 1: Cluster Section 2: Grid Section 3: Cloud Cluster An Overview Need for a Cluster Cluster categorizations A computer cluster is a group of

More information

Vangelis Floros, GRNET S.A. 3 rd Open Source Software Conference March 22, 2008 NTUA, Athens Greece

Vangelis Floros, GRNET S.A. 3 rd Open Source Software Conference March 22, 2008 NTUA, Athens Greece Vangelis Floros, GRNET S.A. 3 rd Open Source Software Conference March 22, 2008 NTUA, Athens Greece Introduction What is a Grid? What is escience? Large Scientific Grids The example of EGEE Building Grid

More information

Report from SARA/NIKHEF T1 and associated T2s

Report from SARA/NIKHEF T1 and associated T2s Report from SARA/NIKHEF T1 and associated T2s Ron Trompert SARA About SARA and NIKHEF NIKHEF SARA High Energy Physics Institute High performance computing centre Manages the Surfnet 6 network for the Dutch

More information

Status and Evolution of ATLAS Workload Management System PanDA

Status and Evolution of ATLAS Workload Management System PanDA Status and Evolution of ATLAS Workload Management System PanDA Univ. of Texas at Arlington GRID 2012, Dubna Outline Overview PanDA design PanDA performance Recent Improvements Future Plans Why PanDA The

More information

Betriebssystem-Virtualisierung auf einem Rechencluster am SCC mit heterogenem Anwendungsprofil

Betriebssystem-Virtualisierung auf einem Rechencluster am SCC mit heterogenem Anwendungsprofil Betriebssystem-Virtualisierung auf einem Rechencluster am SCC mit heterogenem Anwendungsprofil Volker Büge 1, Marcel Kunze 2, OIiver Oberst 1,2, Günter Quast 1, Armin Scheurer 1 1) Institut für Experimentelle

More information

Linux and the Higgs Particle

Linux and the Higgs Particle Linux and the Higgs Particle Dr. Bernd Panzer-Steindel Computing Fabric Area Manager, CERN/IT Linux World, Frankfurt 27.October 2004 Outline What is CERN The Physics The Physics Tools The Accelerator The

More information

Plateforme de Calcul pour les Sciences du Vivant. SRB & glite. V. Breton. http://clrpcsv.in2p3.fr

Plateforme de Calcul pour les Sciences du Vivant. SRB & glite. V. Breton. http://clrpcsv.in2p3.fr SRB & glite V. Breton http://clrpcsv.in2p3.fr Introduction Goal: evaluation of existing technologies for data and tools integration and deployment Data and tools integration should be addressed using web

More information

Computing in High- Energy-Physics: How Virtualization meets the Grid

Computing in High- Energy-Physics: How Virtualization meets the Grid Computing in High- Energy-Physics: How Virtualization meets the Grid Yves Kemp Institut für Experimentelle Kernphysik Universität Karlsruhe Yves Kemp Barcelona, 10/23/2006 Outline: Problems encountered

More information

GridKa: Roles and Status

GridKa: Roles and Status GridKa: Roles and Status GmbH Institute for Scientific Computing P.O. Box 3640 D-76021 Karlsruhe, Germany Holger Marten http://www.gridka.de History 10/2000: First ideas about a German Regional Centre

More information

Instruments in Grid: the New Instrument Element

Instruments in Grid: the New Instrument Element Instruments in Grid: the New Instrument Element C. Vuerli (1,2), G. Taffoni (1,2), I. Coretti (1), F. Pasian (1,2), P. Santin (1), M. Pucillo (1) (1) INAF Astronomical Observatory of Trieste (2) INAF Informative

More information

A demonstration of the use of Datagrid testbed and services for the biomedical community

A demonstration of the use of Datagrid testbed and services for the biomedical community A demonstration of the use of Datagrid testbed and services for the biomedical community Biomedical applications work package V. Breton, Y Legré (CNRS/IN2P3) R. Météry (CS) Credits : C. Blanchet, T. Contamine,

More information

perfsonar deployment over Spanish LHC Tier 2 sites

perfsonar deployment over Spanish LHC Tier 2 sites perfsonar deployment over Spanish LHC Tier 2 sites alberto.escolano@rediris.es www.eu-egee.org EGEE and glite are registered trademarks Agenda Enabling Grids for E-sciencE www.eu-egee.org Introduction

More information

Analyses on functional capabilities of BizTalk Server, Oracle BPEL Process Manger and WebSphere Process Server for applications in Grid middleware

Analyses on functional capabilities of BizTalk Server, Oracle BPEL Process Manger and WebSphere Process Server for applications in Grid middleware Analyses on functional capabilities of BizTalk Server, Oracle BPEL Process Manger and WebSphere Process Server for applications in Grid middleware R. Goranova University of Sofia St. Kliment Ohridski,

More information

Spanish Supercomputing Network

Spanish Supercomputing Network IBERGRID 2008 Spanish Supercomputing Network Francesc Subirada Associate Director Introduction: National Center & Spanish Network The BSC-CNS is the Spanish National Supercomputing Center, created with

More information

EGEE a worldwide Grid infrastructure

EGEE a worldwide Grid infrastructure EGEE a worldwide Grid infrastructure Fabrizio Gagliardi Project Director EGEE CERN, Switzerland IFIC, 6 October 2005 www.eu-egee.org Presentation overview Data intensive science and the rationale for Grid

More information

EGEE is a project funded by the European Union under contract IST-2003-508833

EGEE is a project funded by the European Union under contract IST-2003-508833 www.eu-egee.org NA4 Applications F.Harris(Oxford/CERN) NA4/HEP coordinator EGEE is a project funded by the European Union under contract IST-2003-508833 Talk Outline The basic goals of NA4 The organisation

More information

ATLAS job monitoring in the Dashboard Framework

ATLAS job monitoring in the Dashboard Framework ATLAS job monitoring in the Dashboard Framework J Andreeva 1, S Campana 1, E Karavakis 1, L Kokoszkiewicz 1, P Saiz 1, L Sargsyan 2, J Schovancova 3, D Tuckett 1 on behalf of the ATLAS Collaboration 1

More information

CHAPTER 5 IMPLEMENTATION OF THE PROPOSED GRID NETWORK MONITORING SYSTEM IN CRB

CHAPTER 5 IMPLEMENTATION OF THE PROPOSED GRID NETWORK MONITORING SYSTEM IN CRB 60 CHAPTER 5 IMPLEMENTATION OF THE PROPOSED GRID NETWORK MONITORING SYSTEM IN CRB This chapter discusses the implementation details of the proposed grid network monitoring system, and its integration with

More information

CMS Dashboard of Grid Activity

CMS Dashboard of Grid Activity Enabling Grids for E-sciencE CMS Dashboard of Grid Activity Julia Andreeva, Juha Herrala, CERN LCG ARDA Project, EGEE NA4 EGEE User Forum Geneva, Switzerland March 1-3, 2006 http://arda.cern.ch ARDA and

More information

Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft. Support in EGEE. (SA1 View) Torsten Antoni GGUS, FZK

Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft. Support in EGEE. (SA1 View) Torsten Antoni GGUS, FZK Support in EGEE (SA1 View) Torsten Antoni GGUS, FZK (torsten.antoni@iwr.fzk.de) with input from LCG Operations Workshop, e-irg e Workshop Why is there need for support? New support groups Network layer

More information

Sustainable Grid User Support

Sustainable Grid User Support Sustainable Grid User Support Dr. Torsten Antoni torsten.antoni@kit.edu www.eu-egee.org EGEE and glite are registered trademarks User education User support is Simple access to a broad range of information

More information

PROGRESS Access Environment to Computational Services Performed by Cluster of Sun Systems

PROGRESS Access Environment to Computational Services Performed by Cluster of Sun Systems PROGRESS Access Environment to Computational Services Performed by Cluster of Sun Systems Michał Kosiedowski, Cezary Mazurek, Maciej Stroiński 1) 1) Poznan Supercomputing and Networking Center Noskowskiego

More information

Microsoft Research Worldwide Presence

Microsoft Research Worldwide Presence Microsoft Research Worldwide Presence MSR India MSR New England Redmond Redmond, Washington Sept, 1991 San Francisco, California Jun, 1995 Cambridge, United Kingdom July, 1997 Beijing, China Nov, 1998

More information

Status of Grid Activities in Pakistan. FAWAD SAEED National Centre For Physics, Pakistan

Status of Grid Activities in Pakistan. FAWAD SAEED National Centre For Physics, Pakistan Status of Grid Activities in Pakistan FAWAD SAEED National Centre For Physics, Pakistan 1 Introduction of NCP-LCG2 q NCP-LCG2 is the only Tier-2 centre in Pakistan for Worldwide LHC computing Grid (WLCG).

More information

GRIP:Creating Interoperability between Grids

GRIP:Creating Interoperability between Grids GRIP:Creating Interoperability between Grids Philipp Wieder, Dietmar Erwin, Roger Menday Research Centre Jülich EuroGrid Workshop Cracow, October 29, 2003 Contents Motivation Software Base at a Glance

More information

Data Management System for grid and portal services

Data Management System for grid and portal services Data Management System for grid and portal services Piotr Grzybowski 1, Cezary Mazurek 1, Paweł Spychała 1, Marcin Wolski 1 1 Poznan Supercomputing and Networking Center, ul. Noskowskiego 10, 61-704 Poznan,

More information

The Spanish Distributed TIER-2 for the ATLAS experiment of LHC

The Spanish Distributed TIER-2 for the ATLAS experiment of LHC The Spanish Distributed TIER-2 for the ATLAS experiment of LHC COORDINATED PROJECT ES-ATLAS-T2 logo Universidad Autónoma de Madrid (Madrid) Instituto de Física de Altas Energías (Barcelona) Instituto de

More information

Installing, Running and Maintaining Large Linux Clusters at CERN

Installing, Running and Maintaining Large Linux Clusters at CERN Installing, Running and Maintaining Large Linux Clusters at CERN Vladimir Bahyl, Benjamin Chardi, Jan van Eldik, Ulrich Fuchs, Thorsten Kleinwort, Martin Murth, Tim Smith CERN, European Laboratory for

More information

GEDDM - Commercial Data Mining Using Distributed Resources

GEDDM - Commercial Data Mining Using Distributed Resources GEDDM - Commercial Data Mining Using Distributed Resources Mark Prentice 1 st December 2004 Introduction Industrial partner Overview of GEDDM project Application areas Grid enabled implementation Current

More information

Grid Computing: A Ten Years Look Back. María S. Pérez Facultad de Informática Universidad Politécnica de Madrid mperez@fi.upm.es

Grid Computing: A Ten Years Look Back. María S. Pérez Facultad de Informática Universidad Politécnica de Madrid mperez@fi.upm.es Grid Computing: A Ten Years Look Back María S. Pérez Facultad de Informática Universidad Politécnica de Madrid mperez@fi.upm.es Outline Challenges not yet solved in computing The parents of grid Computing

More information

Status and Integration of AP2 Monitoring and Online Steering

Status and Integration of AP2 Monitoring and Online Steering Status and Integration of AP2 Monitoring and Online Steering Daniel Lorenz - University of Siegen Stefan Borovac, Markus Mechtel - University of Wuppertal Ralph Müller-Pfefferkorn Technische Universität

More information

Concepts and Architecture of Grid Computing
Advanced Topics, Spring 2008, Prof. Robert van Engelen. Overview: Grid users, who are they? Concept of the Grid; challenges for the Grid; evolution of Grid systems …

The CC1 system: solution for private cloud computing
Outline: What is CC1? Features; technical details; use cases (by scientist, by HEP experiment); system requirements and installation; how to get it …

The GridKa Installation for HEP Computing
Forschungszentrum Karlsruhe GmbH, Central Information and Communication Technologies Department, Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen. Holger …

GRMS - The resource management system for Clusterix computational environment
Bogdan Ludwiczak, bogdanl@man.poznan.pl, Poznań Supercomputing and Networking Center. Outline: GRMS, what it is; GRMS features …

An Experience in Accessing Grid Computing Power from Mobile Device with GridLab Mobile Services
Abstract: In this paper we review the use of mobile devices in a grid computing environment. We describe …

CERN's Scientific Programme and the need for computing resources
This document produced by Members of the Helix Nebula consortium is licensed under a Creative Commons Attribution 3.0 Unported License. Permissions beyond the scope of this license may be available at …

Network monitoring in DataGRID project
Franck Bonnassieux (CNRS), franck.bonnassieux@ens-lyon.fr. 1st SCAMPI Workshop, 27 Jan. 2003. Outline: DataGRID network; specificity of Grid …

Grid e-services for Multi-Layer SOM Neural Network Simulation
Rui Silva, Faculdade de Engenharia, 4760-108 V. N. Famalicão, Portugal, {rml,rsilva}@fam.ulusiada.pt, 2007. Outline: overview; Multi-Layer SOM; background …

LHC GRID computing in Poland
Michał Turała, IFJ PAN / ACK Cyfronet AGH, Kraków (ICFA DDW07 Symposium, Mexico City, 25.10.2007; Polish Particle Physics Symposium, Warszawa, 21.04.2008). Outline: computing needs …

Recent grid activities at INFN Catania (*)
Roberto Barbera, work in collaboration with Nice srl. (*) HEPiX/HEPNT 2002, Catania, 18.04.2002. Dipartimento di Fisica dell'Università di Catania and INFN Catania …

An approach to grid scheduling by using Condor-G Matchmaking mechanism
E. Imamagic, B. Radic, D. Dobrenic, University Computing Centre, University of Zagreb, Croatia, {emir.imamagic, branimir.radic, dobrisa.dobrenic}@srce.hr …

CERN local High Availability solutions and experiences
Thorsten Kleinwort, CERN IT/FIO. WLCG Tier 2 workshop, CERN, 16.06.2006. Introduction: different h/w used for GRID services; various techniques & …

DESY Tier 2 and NAF
Peter Wegner, Birgit Lewendel for DESY-IT/DV (Hamburg, Zeuthen). Tier 2: status and news; NAF: status, plans and questions. T2 basics: 1.5 average Tier 2 are requested by CMS groups for Germany; DESY commitment: …

Software Provision Process for EGI
Mário David, Gonçalo Borges, Jorge Gomes, João Pina. Laboratório de Instrumentação e Física Experimental de Partículas, Lisboa, Portugal, e-mail: david@lip.pt. Computing and Informatics, Vol. 31, 2012, 135-148 …

DAME: Astrophysical DAta Mining & Exploration on GRID
M. Brescia, S. G. Djorgovski, G. Longo & DAME Working Group. Istituto Nazionale di Astrofisica, Astronomical Observatory of Capodimonte, Napoli; Department …

Grid Computing in Aachen
III. Physikalisches Institut B. Berichtswoche des Graduiertenkollegs, Bad Honnef, 05.09.2008. Concept of Grid Computing: a computing grid, like the power grid, but for …

Building a Private Cloud with Eucalyptus
Christian Baun, Marcel Kunze, KIT. 5th IEEE International Conference on e-Science, Oxford, December 9th 2009. The cooperation of Forschungszentrum Karlsruhe GmbH und …

Alternative models to distribute VO specific software to WLCG sites: a prototype set up at PIC
Elisa Lanciotti, Arnau Bria, Gonzalo …

Monitoring Message Passing Applications in the Grid with GRM and R-GMA
Norbert Podhorszki and Peter Kacsuk. MTA SZTAKI, Budapest, H-1528 P.O. Box 63, Hungary, pnorbert@sztaki.hu, kacsuk@sztaki.hu …

Understanding ArcGIS in Virtualization and Cloud Environments
Marwa Mabrouk. Esri Middle East and Africa User Conference, December 10-12, Abu Dhabi, UAE. Powerful GIS capabilities delivered as Web services …

Virtualization Infrastructure at Karlsruhe
HEPiX Fall 2007. Volker Buege, Ariel Garcia, Marcus Hardt, Fabian Kulla, Marcel Kunze, Oliver Oberst, Günter Quast, Christophe Saout …

Tier-2 cloud
Holger Marten (Holger.Marten at iwr.fzk.de, www.gridka.de), Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft. GridKa associated Tier-2 sites spread over 3 EGEE regions (4 LHC experiments, 5 (soon: 6) countries, >20 T2 sites) …

Grid Computing With FreeBSD
Brooks Davis, Craig Lee, The Aerospace Corporation, El Segundo, CA, {brooks,lee}@aero.org. USENIX ATC '04: UseBSD SIG, Boston, MA, June 29th 2004. http://people.freebsd.org/~brooks/papers/usebsd2004/ …

STW Open Technology Programme and H2020 Future & Emerging Technology
Grants Week 2015, October 9th. Individual funding opportunities in Europe; support for research funding acquisition at the …

Configuration Management of Massively Scalable Systems
Marcin Jarząb, Krzysztof Zieliński, Jacek Kosiński. SUN Center of Excellence, Department … (KKIO 2005)

*a, J. Shank b, D. Barberis c, K. Bos d, A. Klimentov e and M. Lamanna a
a CERN, Switzerland; b Boston University; c Università & INFN Genova; d NIKHEF Amsterdam; e BNL Brookhaven National Laboratories. E-mail: guido.negri@cern.ch, shank@bu.edu, dario.barberis@cern.ch, kors.bos@cern.ch, alexei.klimentov@cern.ch, massimo.lamanna@cern.…

Experiences with the GLUE information schema in the LCG/EGEE production Grid
Stephen Burke, Sergio Andreozzi and Laurence Field. CHEP07, Victoria, Canada. www.eu-egee.org …

LCG POOL, Distributed Database Deployment and Oracle Services@CERN
Dirk Düllmann, CERN. HEPiX Fall 04, BNL. Outline: POOL persistency framework and its use in LHC data challenges; LCG 3D project scope and …

Cluster Computing at HRI
J. S. Bagla, Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211019. E-mail: jasjeet@mri.ernet.in. Introduction and some local history; high performance computing …

Big Data in BioMedical Sciences
Steven Newhouse, Head of Technical Services, EMBL-EBI. EMBL-EBI: what we do and why? Challenges & opportunities; infrastructure requirements …

Poland: networking, digital divide and grid projects
M. Pzybylski, The Poznan Supercomputing and Networking Center, Poznan, Poland; M. Turala, The Henryk Niewodniczanski Institute of Nuclear Physics PAN and ACK …

FermiGrid Highly Available Grid Services
Eileen Berman, Keith Chadwick, Fermilab. Work supported by the U.S. Department of Energy under contract No. DE-AC02-07CH11359. Outline: FermiGrid architecture & …

CMS: Challenges in Advanced Computing Techniques (Big Data, Data Reduction, Data Analytics)
Oliver Gutsche, with input from Daniele Bonacorsi, Ian Fisk, Valentin Kuznetsov, David Lange. CERN openlab technical …

INFN Testbed status report
L. Gaido, Oxford, July 2-5 2001. Dedicated resources (available now), Quantum Grid: 3-4 PCs in 15 sites: Bari, Bologna, Cagliari, Catania, Cnaf, Ferrara, Lecce, Legnaro, Milano, Napoli, Padova, Parma, Pisa, …

Deploying Business Virtual Appliances on Open Source Cloud Computing
Tran Van Lang and … International Journal of Computer Science and Telecommunications, Volume 3, Issue 4, April 2012, ISSN 2047-3338.

Enabling multi-cloud resources at CERN within the Helix Nebula project
D. Giordano (CERN IT-SDC). HEPiX Spring 2014 Workshop, 23 May 2014. This document produced by Members of the Helix Nebula consortium is licensed under …

IISAS Certification Authority
Jan Astalos, Department of Parallel and Distributed Computing, Institute of Informatics, Slovak Academy of Sciences, http://www.ui.sav.sk. IISAS and CrossGrid grid application …

PRACE: An Introduction
Tim Stitt PhD, CSCS, Switzerland. High Performance Computing, a key technology: 1. supercomputing is the tool for solving the most challenging problems through simulations; 2. access …

elab and the FVG grid
Stefano Cozzini, CNR-INFM DEMOCRITOS and SISSA elab, Trieste. Agenda/aims: present elab and its computational infrastructure; GRID-FVG structure; basic requirements; technical choices; open …

Scheduling and Load Balancing in the Parallel ROOT Facility (PROOF)
Gerardo Ganis, CERN, Gerardo.Ganis@cern.ch; Institute of Informatics, University of Warsaw, Jan.Iwaszkiewicz@cern.ch …

What is this Thing Called the Grid? Or: Is the Emperor Naked?
Leif Nixon, 2 December 2003. Please interrupt whenever you have questions. These slides will be available from http://www.nsc.liu.se/~nixon/ …

Log managing at PIC
A. Bruno Rodríguez Rodríguez, Port d'informació científica, Campus UAB, Bellaterra, Barcelona. December 3, 2013 …

A quantitative comparison between Xen and KVM
This content has been downloaded from IOPscience …

Virtualization of a Cluster Batch System
Christian Baun, Volker Büge, Benjamin Klein, Jens Mielke, Oliver Oberst and Armin Scheurer. A batch system accepts computational …

System Requirements
Table of contents: 1 Introduction; 2 Knoa Agent (2.1 System Requirements, 2.2 Environment Requirements); 3 Knoa Server Architecture (3.1 Knoa Server Components, 3.2 Server Hardware Setup) …

Database Services for Physics @ CERN
Deployment and monitoring. Radovan Chytracek, CERN IT Department. Outline: database services for physics; status today; how we do the services tomorrow; performance tuning …

Deploying a distributed data storage system on the UK National Grid Service using federated SRB
Manandhar A.S., Kleese K., Berrisford P., Brown G.D., CCLRC e-Science Center. Abstract: As Grid enabled applications …

Grid Activities in Poland
Jarek Nabrzyski, Poznan Supercomputing and Networking Center, naber@man.poznan.pl. Outline: PSNC; national program PIONIER; sample projects: Progress and Clusterix; R&D Center. PSNC was …