GRID Initiatives in the Spanish Academic Network
1 GRID Initiatives in the Spanish Academic Network. GT RedIRIS 2002. Jesús Marco, CSIC
2 GRID Initiatives
High Energy Physics: the challenge of the next accelerator, the LHC
EU-DataGrid (IFAE, testbed), CCLHC-ES
LCG (CERN, Spanish participation), LCG-ES
DataTag
CrossGrid: interactive applications, testbed, companies
EoI for FP6 (6th Framework Programme)
3 The Challenge of LHC Computing (ATLAS, CMS, LHCb)
Storage: raw recording rate of GBytes/sec, accumulating at 5-8 PetaBytes/year, 10 PetaBytes of disk
Processing: 200,000 of today's fastest PCs
4 The Challenge of LHC Computing
Researchers spread all over the world!
Europe: 267 institutes, 4603 users
Elsewhere: 208 institutes, 1632 users
5 The DataGRID project
Project supported by the EU Fifth Framework programme
Principal goal: collaborate with and complement other European and US projects
Project objectives: middleware for fabric & Grid management, large scale testbed, production quality demonstrations, three-year phased developments & demos, open source and communication (Global GRID Forum, Industry and Research Forum)
Main partners: CERN, INFN (I), CNRS (F), PPARC (UK), NIKHEF (NL), ESA-Earth Observation
Other sciences: KNMI (NL), Biology, Medicine
Industrial participation: CS SI (F), DataMat (I), IBM (UK)
Associated partners: Czech Republic, Finland, Germany, Hungary, Spain, Sweden (mostly computer scientists)
Industry and Research Project Forum with representatives from: Denmark, Greece, Israel, Japan, Norway, Poland, Portugal, Russia, Switzerland
Collaboration with similar US GRID initiatives
6 [Diagram: EDG job submission flow. The User Interface (UI) submits a job described in JDL, together with its Input Sandbox, to the Resource Broker, which queries the Information Service and the Replica Catalogue, passes the job (with BrokerInfo) to the Job Submission Service, and schedules it on a Compute Element close to a suitable Storage Element; job status is tracked by the Logging & Bookkeeping service and results are returned to the UI in the Output Sandbox.]
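As an illustration of the brokering step in this diagram, the toy Python sketch below matches a job description against the Compute Elements reported by a hypothetical Information Service, preferring sites close to the required data. All names, attributes and the matching policy are assumptions made for the example; this is not the actual EDG JDL schema or Resource Broker code.

# Toy model of Resource Broker matchmaking (illustrative only).
job = {
    "Executable": "analysis.sh",
    "InputSandbox": ["analysis.sh", "cuts.dat"],
    "OutputSandbox": ["histos.root", "stdout.log"],
    "Requirements": {"min_free_cpus": 4, "dataset": "higgs-mc-2002"},
}

# What a (hypothetical) Information Service / Replica Catalogue might report.
compute_elements = [
    {"name": "ce.ifca.es",     "free_cpus": 10, "close_datasets": ["higgs-mc-2002"]},
    {"name": "ce.ifae.es",     "free_cpus": 2,  "close_datasets": ["higgs-mc-2002"]},
    {"name": "ce.cyfronet.pl", "free_cpus": 30, "close_datasets": []},
]

def broker(job, ces):
    """Return matching Compute Elements, best first (data locality, then free CPUs)."""
    req = job["Requirements"]
    matches = [ce for ce in ces if ce["free_cpus"] >= req["min_free_cpus"]]
    return sorted(matches,
                  key=lambda ce: (req["dataset"] in ce["close_datasets"],
                                  ce["free_cpus"]),
                  reverse=True)

for ce in broker(job, compute_elements):
    print("candidate CE:", ce["name"])

The real broker ranks sites using the published GLUE-style attributes and the replica locations; the sketch only captures the idea that matchmaking combines resource requirements with data locality.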
7 Spanish Participation in DataGRID WP6 (TESTBED)
2001: IFAE reports on behalf of the other HEP institutions working in the testbed workpackage of the DataGrid project in Spain (IFCA, CIEMAT, UAM, IFIC)
Certification Authority, installation kits, information servers (GIIS), Condor batch system and AFS, DataGrid project web sites and mailing lists for Spain
Institution | Contact | Role | Funded manpower
IFAE | A.Pacheco | Testbed site, coordination | R.Escribá
CIEMAT | N.Colino | Testbed site, CMS grid contact | F.J.Calonge
IFCA | R.Marco | Testbed site, top GIIS for Spain, Certification Authority | O.Ponce
IFIC | J.Salt | Testbed site, ATLAS grid contact | S.González
8 The CERN LHC Computing Grid project
After the CERN Hoffmann review (2000):
Resource implications presented to the LHC experiment RRBs in March
CERN Management summary presented to the SPC and the Committee of Council in March as a white paper
Discussions between CERN Management and the LHC experiment spokespersons; LHC turn-on schedule agreed between machine and experiments
CERN/2379 green paper for Council and FC in June: development and deployment of the LHC Computing Grid infrastructure should be set up and managed as a unified project, similar in some ways to a detector collaboration, with CERN as the coordinating institution
There should be an initial Prototyping Phase
The scale and complexity of the development is large and must be approached using a project structure; work is needed in the Member State institutes and at CERN
Human and material resources for CERN's part of Phase I are not sufficient and should be funded by additional contributions from interested Member States
AGREED! The Spanish contribution includes fellowships at CERN
9 Spain, 2001: Acción Especial for local infrastructure
Objective: initial seed for LHC computing at each site: trained personnel, startup hardware
Trigger participation in: the CERN LHC GRID Computing project (IT & collaborations), collaboration software, GRID projects
10 LCG-ES: 3-year project coordinated by Manuel Delfino (PIC)
Deliverable codes: EAD = Analysis Farm, EDS = SW Dev Platform, RSG = SW Repository, GSW = SW Gridification, MCF = MC Fabric, GVM = Virtual MC Farm, ETD = Data Transform, PIC = Gridified Data Store, SEG = Security Architect, CTS = Tech MC Support, CDC = Data Chal. Coord.
Deliverables to fulfill the objectives, per site:
USC: EAD, GSW, CDC
UAM: EAD, MCF, GSW, CDC
IFCA: EAD, MCF, EDS, RSG, SEG, CTS
CIEMAT: EAD, GVM
IFAE: EAD, ETD, PIC, EDS, GSW
UB: EAD, EDS, CTS
IFIC: EAD, MCF, CTS, CDC
Stay away from glitz; concentrate on deployment, MC & analysis; use local universities for technology transfer to other disciplines
600 kCHF materials contribution to LCG-CERN
11 The CROSSGRID project
European project (Cross Action CPA9, 6th IST call, 5th Framework Programme), 5 M€
Objectives: extending the GRID across Europe: Testbed (WP4); Interactive Applications (WP1) in health care (vascular surgery), environment (air pollution, meteorology, flooding...), HEP (interactive data analysis)
Partners: Poland (coordinator, M.Turala), Germany (FZK), the Netherlands, Portugal, Greece... (13 countries, 21 institutions); industry: Datamat (I), Algosystems (Gr)
Spain: CSIC (IFCA, IFIC, RedIRIS), UAB, USC/CESGA, participating in applications (environment, HEP), performance and monitoring, resource management, testbed (CSIC WP leader)
Started 1st March 2002; Q1 deliverables released (including all SRS and testbed planning)
12 CrossGrid WP1, Task 1.3: Distributed Data Analysis in HEP
Coordinated by C.Martinez (CSIC)
Subtask 1.3.2: data-mining techniques on the GRID
ANN (Artificial Neural Networks): the main tool for data mining in HEP
Example of a physics analysis using ANN
13 [Diagram: HEP interactive application. The user authenticates through the Portal (CAS service, authorization) and starts an Interactive Session; the Resource Broker and Replica Manager locate the DATASET; the Interactive Session Manager distributes the processing over Interactive Session Workers, which exchange XML input/output and query a database server (DB installation).]
14 Storage Element as a WebService? (David Rodriguez, CSIC)
Current SE in EDG: a GridFTP server
WebService approach:
Passive SE: GridFTP, or /grid, etc.
Active SE: SQL QUERY (ResultSet returned in XML) = SELECT ... FROM ... (three-tier, with a servlet running, like Spitfire): ready! (IBM IDS)
ROOT query (does this make sense? A PAW query does make sense, and is implemented...)
PROCESSING QUERY (= agent): stored procedure or XML description (SOAP-like?)
SQL QUERY is enough for NN in HEP; a PROCESSING QUERY (agent-like approach) is likely needed for SOM
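A minimal sketch of the "active SE" idea, assuming a plain HTTP front-end and an in-memory SQLite table standing in for the real database tier: an SQL query arrives as a request parameter and the result set is returned as XML. The table, URL and port are made up for the example; this is not Spitfire or the EDG code, and a real service would authenticate (GSI) and restrict the allowed queries.

# Illustrative "active Storage Element": SQL query in, XML ResultSet out.
import sqlite3
import xml.etree.ElementTree as ET
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

DB = sqlite3.connect(":memory:", check_same_thread=False)
DB.execute("CREATE TABLE events (run INTEGER, njets INTEGER, mass REAL)")
DB.executemany("INSERT INTO events VALUES (?, ?, ?)",
               [(1, 2, 80.4), (1, 3, 91.2), (2, 4, 172.0)])

class ActiveSE(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. /query?sql=SELECT run, mass FROM events WHERE njets > 2
        qs = parse_qs(urlparse(self.path).query)
        sql = qs.get("sql", ["SELECT * FROM events"])[0]
        cur = DB.execute(sql)            # toy code: no validation or security
        root = ET.Element("ResultSet")
        cols = [d[0] for d in cur.description]
        for row in cur.fetchall():
            rec = ET.SubElement(root, "Row")
            for col, val in zip(cols, row):
                ET.SubElement(rec, col).text = str(val)
        body = ET.tostring(root, encoding="utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ActiveSE).serve_forever()

The PROCESSING QUERY mentioned on the slide would go one step further: instead of returning rows, the service would run a stored procedure or shipped agent next to the data and return only the processed result.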
15 HEP Interactive Portal
V.O. authentication, DATASET, resources monitoring
DATASET dictionary (classes): basic objects, derived procedures
Graphic output/(input?), analysis scripts, alphanumeric output, work persistency
16 Distributed (via MPI) NN training scaling
[Plot: distributed NN performance, total time in seconds vs. number of computing nodes, with a power-law trend line; training sample of events with 16 variables, 1000 epochs for training.]
First checks with nodes at Santander & RedIRIS (Oscar Ponce & Antonio Fuentes); for the remote configuration, modelling including latency shows that <100 ms is needed!
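The sketch below illustrates the distributed training scheme at the level of one gradient step, using mpi4py and a toy single-layer network; the dataset, network and learning rate are invented for the example, while the real analysis used a proper multilayer ANN. Each node works on its local events and the gradients are combined with an Allreduce, which is the communication step where the <100 ms latency requirement quoted above matters.

# Hedged sketch of data-parallel NN training with MPI (illustrative only).
# Requires mpi4py and NumPy; run e.g.: mpirun -np 4 python nn_mpi.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(seed=rank)
n_local, n_vars = 1000, 16            # 16 input variables, as on the slide
x = rng.normal(size=(n_local, n_vars))
y = (x.sum(axis=1) > 0).astype(float) # toy signal/background label
w = np.zeros(n_vars)                  # shared weights, identical on all nodes

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eta = 0.1
for epoch in range(1000):             # 1000 training epochs, as on the slide
    # local gradient of the cross-entropy loss on this node's events
    p = sigmoid(x @ w)
    local_grad = x.T @ (p - y)
    # sum gradients over all nodes; every node receives the same total
    total_grad = np.zeros_like(w)
    comm.Allreduce(local_grad, total_grad, op=MPI.SUM)
    w -= eta * total_grad / (size * n_local)

if rank == 0:
    print("trained weights:", w[:4], "...")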
17 SOM Application for Data Mining
Adaptive competitive learning
Downscaling weather forecasts: sub-grid details escape from numerical models!
18 Atmospheric Pattern Recognition
Prototypes for a trained SOM: close units in the lattice are associated with similar atmospheric patterns.
[Maps shown for T at 1000 mb, T at 500 mb, and Z, U, V at 500 mb.]
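To make the "adaptive competitive learning" concrete, here is a minimal SOM training loop; it is a generic sketch, not the CrossGrid implementation, and the lattice size, learning-rate schedule and synthetic patterns are assumptions. Each input pattern pulls its best-matching unit and that unit's lattice neighbours towards itself, so nearby units end up representing similar atmospheric patterns.

# Minimal self-organizing map (SOM) competitive-learning sketch.
import numpy as np

def train_som(patterns, grid=(10, 10), epochs=50, lr0=0.5, sigma0=3.0):
    rng = np.random.default_rng(0)
    n_units, dim = grid[0] * grid[1], patterns.shape[1]
    weights = rng.normal(size=(n_units, dim))
    # lattice coordinates of each unit, used for the neighbourhood term
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 1e-3
        for x in patterns:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)      # lattice distance
            h = np.exp(-d2 / (2 * sigma ** 2))                  # neighbourhood kernel
            weights += lr * h[:, None] * (x - weights)          # competitive update
    return weights.reshape(grid[0], grid[1], dim)

# e.g. cluster 500 synthetic "atmospheric patterns" of 20 grid-point values
prototypes = train_som(np.random.default_rng(1).normal(size=(500, 20)))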
19 CrossGrid Architecture (OGSA in mind)
[Layered diagram:]
Applications: 1.1 BioMed, 1.2 Flooding, 1.3 Interactive Distributed Data Access, 1.3 Data Mining on Grid (NN), 1.4 Meteo Pollution
Supporting Tools: 2.2 MPI Verification, 2.3 Metrics and Benchmarks, 2.4 Performance Analysis, 3.1 Portal & Migrating Desktop
Applications Development Support: 1.1 Grid Visualisation Kernel, MPICH-G (1.1, 1.2), HLA and others
Grid Common Services: 3.2 Scheduling Agents, DataGrid Job Manager, 1.1 User Interaction Services, Roaming Access, Distributed Data Collection, 3.4 Optimization of Grid Data Access, Replica Manager (DataGrid / Globus), Replica Catalog, 3.3 Grid Monitoring, GRAM, GSI, Globus-IO, GIS / MDS, GridFTP
Local Resources: Resource Manager (SE) with secondary and tertiary storage, Resource Manager (CE) with CPUs, 3.4 Optimization of Local Data Access, Resource Managers (1.1, 1.2) for scientific instruments (medical scanners, satellites, radars), VR systems (caves, immersive desks), visualization tools
20 CrossGrid WP4 - International Testbed Organisation
Objectives: testing and validation for applications, the programming environment, and new services & tools
Emphasis on collaboration with DATAGRID, plus extension to DATATAG
Extension of the GRID across Europe
21 CROSSGRID testbed sites: TCD Dublin, PSNC Poznan, USC Santiago, CSIC IFCA Santander, UvA Amsterdam, FZK Karlsruhe, ICM & IPJ Warsaw, CYFRONET Cracow, II SAS Bratislava, LIP Lisbon, CSIC RedIRIS Madrid, UAB Barcelona, CSIC IFIC Valencia, AuTh Thessaloniki, DEMO Athens, UCY Nicosia
22 CrossGrid WP4 - International Testbed Organisation
Tasks in WP4:
4.0 Coordination and management (task leader: J.Marco, CSIC, Santander): coordination with WP1, 2, 3; collaborative tools (web + videoconference + repository); Integration Team
Site contacts: IPJ (Warsaw) K.Nawrocki, UvA (Amsterdam) D.van Albada, FZK (Karlsruhe) M.Hardt, IISAS (Bratislava) J.Astalos, PSNC (Poznan) P.Wolniewicz, UCY (Cyprus) G.Tsouloupas
4.1 Testbed setup & incremental evolution (task leader: R.Marco, CSIC, Santander): define the installation, deploy testbed releases, certificates, Security Working Group (A.Fuentes, RedIRIS)
Testbed site responsibles: CYFRONET (Krakow) A.Ozieblo, ICM (Warsaw) W.Wislicki, TCD (Dublin) B.Coghlan, CSIC (Santander/Valencia) J.Sanchez, UAB (Barcelona) E.Heymann, USC/CESGA (Santiago) C.Fernandez, Demo (Athens) Y.Cotronis, AuTh (Thessaloniki) C.Kanellopoulos, LIP (Lisbon) J.Martins
23 CrossGrid WP4 - International Testbed Organisation
Tasks in WP4:
4.2 Integration with DATAGRID (task leader: M.Kunze, FZK): coordination of testbed setup, exchange of knowledge, participation in WP meetings
4.3 Infrastructure support (task leader: J.Salt, CSIC, Valencia): fabric management, HelpDesk, provide the installation kit, network support: QoS (working group, I.Lopez, CESGA)
4.4 Verification & quality control (task leader: J.Gomes, LIP): feedback, improve the stability of the testbed
JOINING the DataGrid testbed 1.2 in July 2002
24 Hands on IFCA (...)
25 IFCA Research Institute: University of Cantabria & Consejo Superior de Investigaciones Científicas
Three main research lines:
Astrophysics (XMM, Planck...)
Statistical Physics (lasers, fractals & chaos...)
High Energy Physics: DELPHI, LEP (physics analysis); CDF, Fermilab (TOF detector & physics analysis); CMS, LHC (alignment & Geant4 simulation, OSCAR)
Common interest, computing needs: data management, advanced analysis techniques, optimizing resources for infrastructure & manpower
26 HEP Computing at IFCA
Previous experience: DELPHI fast simulation, RPC software for DELPHI on-line, analysis software for DELPHI (NN, IES...)
Initiatives: databases (use of O/R DBMS in HEP), FEDER project with the DB software company Semicrol
GRID initiatives: DataGRID (testbed site & CA for Spain); CROSSGRID: WP1 (HEP and meteo applications), WP2, WP4 (testbeds); technology transfer with companies (Mundivia, CIC); participation in the DataTag testbed (CDF); computing for LHC (CMS)
27 GRID team in Santander
Research line at IFCA (Univ. Cantabria + CSIC): staff + contracts + fellowships
Expertise: database use; testbed issues (cluster installation, security, CA, etc.)
Applications: astrophysics, complex systems, HEP, meteorology
Collaboration and support (via projects): NN and methods with the Dpto. Matematicas; clusters & MPI with the Grupo de Arquitectura de Computadores; network with the Centro de Calculo U.C.
Companies: Mundivia, CIC-SL, Semicrol
28 Resources
New IFCA building with support for e-science activities (2002/2003)
New infrastructure: cluster of ~100 IBM servers (100% available for GRID): dual 1.26 GHz, 640 MB-4 GB RAM, 80 GB per server, plus a 4-way processor gatekeeper; Gigabit local backbone
Improved network connection: 155 (?) Mbps Santander-RedIRIS (GÉANT node)
29 72 Computing Elements / Worker Nodes, 8 Storage Elements
IBM xSeries: CPU 1.26 GHz, 128 MB + 512 MB SDRAM, hard disks: SCSI 30 GB and IDE 60 GB, network: 100 Mbps, CD-ROM, floppy
NEXT UPDATES: 8 network cards at 1000 Mbps (for the Storage Elements, ...), join the 1.26 GHz CPUs in a dual setup, buy new >= 1.4 GHz CPUs, two machines with 4 GB SDRAM for tests
30 Remote Automatic Installation
Nodes configured for PXE boot
Installation server: DHCP, NFS, TFTP; 1 server for LCFG, 1 server for PXELINUX + Kickstart
Help sources: PXELINUX (from SYSLINUX), HOWTO: Install Red Hat Linux via PXE and Kickstart
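As a sketch of how the PXE side of such an installation server can be automated, the script below writes one PXELINUX configuration file per node, keyed on the node's MAC address, pointing the Red Hat installer at a shared Kickstart file served over NFS. The paths, node names, MAC addresses and Kickstart location are assumptions for the example; this is not the LCFG configuration itself.

# Illustrative generator of per-node PXELINUX boot entries (hypothetical inventory).
import os

NODES = {                      # hypothetical cluster inventory
    "node01": "00:02:55:aa:01:01",
    "node02": "00:02:55:aa:01:02",
}
TFTP_ROOT = "/tftpboot/pxelinux.cfg"
KICKSTART_URL = "nfs:installserver:/install/ks.cfg"   # assumed NFS export

ENTRY = """default linux
label linux
  kernel vmlinuz
  append initrd=initrd.img ks={ks} ksdevice=eth0
"""

def mac_to_filename(mac):
    # PXELINUX looks for a config file named 01-<mac with dashes, lowercase>
    return "01-" + mac.lower().replace(":", "-")

os.makedirs(TFTP_ROOT, exist_ok=True)
for node, mac in NODES.items():
    path = os.path.join(TFTP_ROOT, mac_to_filename(mac))
    with open(path, "w") as f:
        f.write(ENTRY.format(ks=KICKSTART_URL))
    print(f"wrote PXE entry for {node} -> {path}")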
31 A new IST Grid project space (Kyriakos Baxevanidis)
[Diagram: projects arranged between Applications, Middleware & Tools and Underlying Infrastructures, spanning industry/business and science: GRIA, EGSO, CROSSGRID, GRIP, EUROGRID, GRIDLAB, DATAGRID, DAMIEN, DATATAG]
Links with European national efforts; links with US projects (GriPhyN, PPDG, iVDGL, ...)
32 EoI for FP6 (7 June 2002): Integrated Project EGEE (coordinated by CERN)
CSIC: RedIRIS, IFCA (Santander), IFIC (Valencia), IMEDEA (Palma), CAB (Madrid), CNB (Madrid), CBM (?) (Madrid), IAA (Granada)
Centres: CIEMAT (Madrid), IFAE (Barcelona), PIC (Barcelona), CESGA (Santiago), IAC (Tenerife)
Universities: U. Cantabria, U. Valencia, U. Murcia, U.A. Barcelona, U.A. Madrid, U. Complutense Madrid
SMEs: CIC-S.L. (Cantabria), GridSystems (Palma)
33 EoI for FP6 (7 June 2002): Network of Excellence RTGRID (Real Time GRIDs)
Spain: CSIC, Univ. Cantabria, CESGA, CIC-SL
Poland: Cyfronet
Greece: Univ. of Athens, Univ. of Thessaloniki
Slovakia: IISAS Bratislava
Cyprus: Univ. of Cyprus
Other proposals: CEPBA, UPV? ...
34 In perspective
GRIDs will help with organizational and large-scale issues, and with metacomputing
Web Services are commercial; OGSA could be the way if performance is OK
The Interactive Grid will be hard without QoS on networks
Several GRID projects with Spanish participation are progressing well
Need for organization in Spain: a Thematic Network plus teams to organize the work; e-science centres to get local support, administrative organization, dissemination and exploitation (we need companies involved)