The Grid-it: the Italian Grid Production infrastructure
Transcription
Slide 1: The Grid-it: the Italian Grid Production infrastructure
Maria Cristina Vistoli, INFN CNAF, Bologna, Italy

Slide 2: INFN-Grid goals
- Promote research & development on computational grid technologies: middleware and grid tools
  - through European and national projects: DataGrid, DataTAG, FIRB Grid.it, EGEE, LCG, CoreGRID, etc.
  - internal R&D activities
- Deploy, operate and support Grid-it, the Italian Grid Production Infrastructure: the Grid as coordinated resource sharing on a large scale for a multi-institutional and dynamic virtual organization
- Set up the Italian Grid Production Infrastructure, open to the national research community
  - FIRB Grid.it: astrophysics, geophysics, biomedicine, computational chemistry, etc.
  - research community and industry

Slide 3: INFN-Grid goals (continued)
- Provide operation and support of the EGEE/LCG production infrastructure and the interconnection of national Grids
- Promote dissemination activity to gridify scientific applications
  - GILDA testbed
  - GENIUS portal

Slide 4: INFN-GRID participation in EGEE
- EGEE SA1: infrastructure operation and support
  - ROC (Regional Operation Center)
  - CIC (Core Infrastructure Center)
- EGEE JRA1 IT-CZ cluster
  - Workload Management System
  - resource access, CE accounting, policy
  - VOMS
- EGEE NA4/NA3/NA2/NA5
  - HEP applications
  - generic applications
  - dissemination

Slide 5: INFN-GRID participation in GRID.IT/FIRB
- Set up the national production grid infrastructure, open to the national research community
- Grid management and support tools and systems
  - first tools in production
  - R&D on:
    - resource utilization policies
    - data management
    - scientific database grid integration

Slide 6: Italian Grid: site map
- National Grid: 26 resource centers (the number is increasing rapidly) of different sizes in terms of computing and storage
- About 2000 CPUs of computing power
- 15 Virtual Organizations
[Map labels: TRENTO, MILANO, UDINE, TORINO, PADOVA, LNL, FERRARA, CNAF, BOLOGNA, PISA, FIRENZE, S. PIERO, PERUGIA, ROMA, ROMA2, LNF, TRIESTE, SASSARI, NAPOLI, BARI, LECCE, CAGLIARI, COSENZA, PALERMO, CATANIA, LNS]

Slide 7: Italian Production Grid computing and storage resources

Slide 8: Grid-it: Italian Grid production services
- Resource Brokers (an illustrative submission sketch follows this slide):
  - EGEE/LCG infrastructure, open to all EGEE VOs (LHC experiments, Biomed, Magic, Planck, Compchem, ESR):
    - egee-rb-01.cnaf.infn.it
    - grid008g.cnaf.infn.it (DAG enabled)
  - ATLAS VO: egee-rb-02.cnaf.infn.it, egee-rb-05.cnaf.infn.it
  - CMS VO: egee-rb-04.cnaf.infn.it, egee-rb-06.cnaf.infn.it
- Replica Location Service for BaBar, Virgo, CDF, Planck and other Italian VOs:
  - datatag2.cnaf.infn.it
  - to be replaced by LFC: lfcserver.cnaf.infn.it
- VOMS server:
  - testbed008.cnaf.infn.it
  - VOs: infngrid, zeus, cdf, planck, compchem
- LDAP server for national VOs (bio, inaf, ingv, gridit, theophys, virgo):
  - grid-vo.cnaf.infn.it

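To illustrate how a user reaches these Resource Brokers from a User Interface, here is a minimal submission sketch: the JDL file and the choice of VO are illustrative, and the broker actually contacted is the one configured on the UI (for example egee-rb-01.cnaf.infn.it for the general EGEE/LCG brokers).

```bash
# Minimal sketch of an LCG-2 job submission from a User Interface.
# The JDL contents and the VO are illustrative; the Resource Broker used
# is taken from the UI configuration (e.g. egee-rb-01.cnaf.infn.it).
cat > hostname.jdl <<'EOF'
Executable    = "/bin/hostname";
StdOutput     = "hostname.out";
StdError      = "hostname.err";
OutputSandbox = {"hostname.out", "hostname.err"};
EOF

voms-proxy-init -voms infngrid        # create a VOMS proxy for the infngrid VO
edg-job-submit --vo infngrid hostname.jdl
# edg-job-status and edg-job-get-output then follow the job and retrieve its output.
```
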
Slide 9: Grid-it: Italian Grid production services (continued)
- MyProxy server (a usage sketch follows this slide):
  - testbed013.cnaf.infn.it
- User Interfaces:
  - UIs are not core services; a list of Italian UIs is nevertheless available
- Monitoring: GridICE servers for:
  - the EGEE/LCG production infrastructure
  - the Italian production infrastructure
  - ATLAS
  - CMS

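Long-running jobs rely on the MyProxy server above for proxy renewal; a minimal sketch using the standard MyProxy client tools (the exact flags may vary with the client version) is:

```bash
# Sketch: store a renewable credential on the Grid-it MyProxy server so that
# the Resource Broker can renew proxies for long-running jobs.
# Flags are the usual MyProxy client ones; check the UI documentation.
myproxy-init -s testbed013.cnaf.infn.it -d -n
myproxy-info -s testbed013.cnaf.infn.it -d    # verify the stored credential
```
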
Slide 10: INFN-GRID middleware release
- Based on LCG
- Several VOs added (about 15)
- Authorization via VOMS + LCAS/LCMAPS (see the sketch after this list)
- MPI configuration
- Cert queue for site functional tests
- GridICE includes WN monitoring
- AFS support on the WN
- DAG job support
- DGAS accounting system

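A minimal sketch of the user-side of this authorization chain, assuming the standard VOMS client commands (the Role shown is only an example): the user requests VOMS attributes when creating a proxy, and LCAS/LCMAPS on the site then map the resulting FQANs to local accounts.

```bash
# Sketch: create a VOMS proxy carrying group/role attributes (FQANs).
# The infngrid VO is from the slides; the Role shown is only an example.
voms-proxy-init -voms infngrid
voms-proxy-init -voms infngrid:/infngrid/Role=SoftwareManager   # request a specific role
voms-proxy-info -all    # inspect the attribute certificate embedded in the proxy
# On the site, LCAS authorizes the proxy and LCMAPS maps its FQANs to a
# local (pool) account before the job reaches the batch system.
```
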
Slide 11: Grid-it Production Grid
- The access point for users and site managers is the Grid-it Production Grid website, which provides:
  - documentation (prerequisites, middleware installation, upgrade, testing, using the Grid, current applications and VOs, etc.)
  - software repository
  - monitoring of resources and services (GridICE)
  - tools for the CMT (Central Management Team), site managers and supporters (calendar/downtime manager)
  - trouble ticketing system for problems, advisories and suggestions
  - a knowledge base to complement the documentation

Slide 12: Grid-it: ticketing system
- The INFN-GRID ticketing system is used:
  - by users, to ask questions or report problems;
  - by system managers, to communicate about common grid tasks (e.g. upgrading to a new grid release);
  - by the Central Management Team, to notify a system manager of a problem.
- Support groups:
  - Grid Services support group (RB, RLS, VOMS, GridICE, etc.);
  - VO/Applications support groups (one per VO: ATLAS, CMS, ALICE, LHCb, BaBar, CDF, Virgo, ...);
  - Site support groups (one per site).
- Operative groups:
  - Operative Central Management Team (CMT);
  - Operative Release & Deployment Team.
- Ticket workflow: users create a ticket; supporters/operatives open the ticket; users and/or supporters/operatives update an open ticket; supporters/operatives close the ticket.

Slide 13: Grid-it: operations activities
- Operation of the grid infrastructure:
  - shifts (Monday to Friday, 08:00-20:00): about 20 people from different sites monitor and test the status of the grid resources (Site Functional Tests and other tests), certify the Italian sites/resources and provide VO and user support
  - management of the grid services (general purpose and VO specific)
  - participation in CIC-on-Duty shifts
- Support (operations, user and VO):
  - regional ticketing system
  - support groups defined and operational for experiments and applications
  - participation in Support-on-Duty shifts
- Release and installation:
  - the current INFN-GRID release is based on LCG and adds DAG, DGAS, VOMS, AFS client support and GridICE monitoring for the WNs; it is now in the deployment phase on the Italian grid infrastructure
  - installation and configuration scripts (YAIM) are provided (see the configuration sketch after this list)
  - Grid Install working group, Fabric and software management WG

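A minimal sketch of a YAIM-style installation, assuming variable names and script locations typical of the LCG-2/INFN-GRID releases of that period (all values and paths below are illustrative):

```bash
# Sketch of a YAIM-style site configuration for a worker node.
# All values are illustrative; exact variables and script locations
# depend on the INFN-GRID/LCG release in use.
cat > site-info.def <<'EOF'
SITE_NAME=INFN-EXAMPLE
CE_HOST=ce.example.infn.it
SE_HOST=se.example.infn.it
BDII_HOST=bdii.example.infn.it
WN_LIST=/opt/lcg/yaim/etc/wn-list.conf
USERS_CONF=/opt/lcg/yaim/etc/users.conf
VOS="infngrid atlas cms"
QUEUES="short long infinite"
EOF

# Install and configure the node with the YAIM scripts (paths are indicative).
/opt/lcg/yaim/scripts/install_node site-info.def lcg-WN
/opt/lcg/yaim/scripts/configure_node site-info.def WN
```
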
Slide 14: Certification activity
- The Central Management Team is responsible for certifying resource centers: checking the functionality of a site before it joins the production grid.
- Although all certification jobs are VO independent, the INFNGRID VO is used to run them.
- In particular, the following are checked (a sketch of the corresponding commands follows this list):
  - MPI functionality;
  - GIIS information consistency;
  - local job submission (LRMS);
  - grid submission with Globus (globus-job-run);
  - grid submission with the Resource Broker;
  - Replica Manager functionality.
- To certify a site, the CMT uses dedicated grid services:
  - RB & BDII: gridit-cert-rb.cnaf.infn.it
- In this way, uncertified sites are kept out of the production grid services.

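A sketch of what the listed checks can look like from the command line; ce.example.infn.it and se.example.infn.it are placeholder hostnames, and the job manager name, UI configuration file and LDAP parameters are assumptions:

```bash
# Sketch of site certification checks; ce.example.infn.it is a placeholder
# for the site's Computing Element and the job manager name is illustrative.

# 1. GIIS information consistency: query the site information system via LDAP.
ldapsearch -x -H ldap://ce.example.infn.it:2135 -b "mds-vo-name=local,o=grid" \
    '(objectClass=GlueCE)' GlueCEUniqueID GlueCEStateStatus

# 2. Grid submission with Globus, bypassing the broker.
globus-job-run ce.example.infn.it:2119/jobmanager-lcgpbs /bin/hostname

# 3. Grid submission through the dedicated certification Resource Broker
#    (gridit-cert-rb.cnaf.infn.it), using the INFNGRID VO and a UI
#    configuration file pointing at that RB (file name illustrative).
edg-job-submit --vo infngrid --config-vo cert-rb.conf hostname.jdl

# 4. Replica Manager functionality: copy and register a test file on the
#    site's Storage Element (lcg-utils shown; the edg-rm client is analogous).
lcg-cr --vo infngrid -d se.example.infn.it file:///tmp/cert-test.txt
```
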
Slide 15: Towards gLite: pre-production, middleware certification
- Pre-production service
  - 3 sites in Italy (CNAF, Padova, Bari) ready
  - CNAF is the pilot site:
    - LCG release: 1 CE, 1 SE, 3 WNs available
    - gLite release: 1 CE, 3 WNs, 1 I/O service, 1 VOMS server, 1 WMS node (formerly RB), 1 double-stack WN, 1 R-GMA server and 1 UI available soon
  - migration strategies will be tested
  - stress tests of middleware components, in particular the WMS
  - open to the experiments' applications to verify the middleware
- Certification infrastructure
  - 4 sites (CNAF, Padova, Roma and Torino)
  - goal: certify the INFN-GRID release, participate in the certification of the gLite release, and add new, specific middleware components to gLite: StoRM, G-PBox
  - produce and verify the installation procedures

Slide 16: Towards gLite: migration
- gLite migration
  - gLite release 1 in the pre-production service to verify:
    - functionality
    - stability
    - performance
  - pre-production open to experiments
  - INFN-GRID/LCG kept as a fall-back solution
  - production infrastructure migration to gLite step by step:
    - first, migration from the RB to the gLite WMS (see the sketch below);
    - then CEs/WNs:
      - major sites deploy gLite CEs in parallel with the LCG-2 CEs;
      - some of the smaller sites convert fully to gLite.

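A minimal sketch of the first migration step, assuming the gLite 1.x UI commands that replace the edg-job-* ones (exact command and option names depend on the gLite version deployed):

```bash
# Sketch: submitting the same JDL through the gLite 1.x WMS instead of the
# LCG-2 Resource Broker. Command names are the gLite 1.x counterparts of
# the edg-job-* commands; exact options depend on the gLite version.
voms-proxy-init -voms infngrid
glite-job-submit hostname.jdl        # returns a job identifier (URL)
glite-job-status <job_id>            # <job_id> stands for the identifier returned above
glite-job-output <job_id>            # retrieve the output sandbox
```
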
Slide 17: Current activities
- Production service quality and stability improvements
  - improve the monitoring systems: GridICE with reactive alarms and notification, new release
  - site isolation: a simple mechanism to remove sites is needed
  - improve the grid management and monitoring structure
- INFN-GRID support infrastructure:
  - VO support (experiments)
  - site administrators
  - grid services support
- Accounting and policy system management

Slide 18: Resource centers status (Feb)

Slide 19: Resource centers status (Mar)

Slide 20: Graphs of resource utilization
[Screenshot: resource usage in the last week for all sites and all VOs, default parameters shown; selectable parameters include site, virtual organization, time interval, show values on bars, and picture size.]

Slide 21: Jobs waiting and jobs running
[Screenshot: default view showing the number of jobs in the last day for all sites and all VOs.]

Slide 22:
[Plots: number of jobs, CPU hours, used memory (average job size), wall time hours.]

Slide 23: Executed jobs in the last three weeks, per site

Slide 24: Site-specific data: CPU hours per VO

Slide 25: Useful links
- Grid-it Production Grid
- Grid-it monitoring/management (GridICE)
- Grid-it test and certification
- Grid-it support
- Contact
- Ticket for operational issues