OPERATIONS STRUCTURE FOR THE MANAGEMENT, CONTROL AND SUPPORT OF THE INFN GRID/GRID.IT PRODUCTION INFRASTRUCTURE
Cristina Aiftimiei (a), Sabrina Argentati (b), Stefano Bagnasco (c), Alex Barchiesi (d), Riccardo Brunetti (c), Andrea Caltroni (a), Luciana Carota (e), Alessandro Cavalli (e), Daniele Cesini (e), Guido Cuscela (f), Marcio Da Cruz (c), Simone Dalla Fina (a), Cesare Delle Fratte (g), Giacinto Donvito (f), Sergio Fantinel (a), Federica Fanzago (a), Enrico Ferro (a), Sandro Fiore (i), Luciano Gaido (c), Francesco Gregoretti (l), Federico Nebiolo (c), Alfredo Pagano (e), Alessandro Paolini (e), Matteo Selmi (e), Rosario Turrisi (a), Luca Vaccarossa (h), Marco Verlato (a), Paolo Veronesi (e), Maria Cristina Vistoli (e)

Istituto Nazionale di Fisica Nucleare (INFN), Italy: Sez. di Padova (a), Lab. Naz. di Frascati (b), Sez. di Torino (c), Sez. di Roma 1 (d), CNAF Bologna (e), Sez. di Bari (f), Sez. di Roma 2 (g), Sez. di Milano (h); (i) University of Lecce and SPACI Consortium, Italy; (l) Istituto di Calcolo e Reti ad Alte Prestazioni, Consiglio Nazionale delle Ricerche (ICAR-CNR), Italy

Abstract

Moving from a national Grid testbed to a production-quality Grid service for HEP applications requires an effective operations structure and organization, proper user and operations support, and flexible, efficient management and monitoring tools. Moreover, easily deployable middleware releases should be provided to the sites, with configuration tools flexible enough for heterogeneous local computing farms. The organizational model, the available tools and the agreed procedures for operating the national/regional Grid infrastructures that are also part of the world-wide EGEE Grid play a key role in the success of a real production Grid, as does the interconnection of the regional operations structures with the global management, control and support structure.
In this paper we describe the operations structure currently in use at the Italian Grid Operation and Support Centre. The activities described cover monitoring, management and support for the INFN GRID/GRID.IT production Grid (comprising more than 30 sites) and its interconnection with the EGEE/LCG structure, as well as the roadmap to improve the overall support quality, stability and reliability of the Grid service.

THE NATIONAL GRID CENTRAL MANAGEMENT TEAM

The Central Management Team

The Central Management Team (CMT) is responsible for the registration, middleware deployment and certification procedures for all INFN GRID sites.

Registration procedure: the Site Manager at the candidate site contacts the Italian ROC (Regional Operation Centre) CMT, providing contact information, service-level definition figures and a statement of acceptance of the policy documents. When the CMT agrees that it is a good candidate site, the ROC operator creates the new site's record in the GOC (Grid Operation Centre [1]) database. At this stage the site status is marked as candidate. The Resource Administrator at the candidate site enters the remaining information in the GOC database and then requests validation by the ROC. If all required site information has been provided, the ROC manager changes the site status to uncertified.

Deployment plan: service deployment and Grid site installation are scheduled according to a plan set forth by the Central Management Team. Adherence to the plan is mandatory, to avoid too many site upgrades on the same day and, above all, to ensure that the service level offered to users and Virtual Organisations (VOs) remains acceptable even during the upgrade. A deployment plan is published whenever a new middleware release becomes available.
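The registration and certification steps described here amount to a small state machine on the site's GOC-database record. A minimal sketch follows; the status names (candidate, uncertified, certified, closed) come from the text, while the function and field names are ours and purely illustrative:

```python
# Sketch of the site-status transitions described above. The status names
# come from the paper; the transition table and API are illustrative.

ALLOWED = {
    "candidate": {"uncertified"},   # ROC validates the GOC DB record
    "uncertified": {"certified"},   # CMT acceptance tests pass
    "certified": {"closed"},        # a scheduled downtime begins
    "closed": {"uncertified"},      # downtime over, re-certification starts
}

def advance(site, new_status):
    """Move a site record to new_status if the transition is allowed."""
    if new_status not in ALLOWED.get(site["status"], set()):
        raise ValueError(f"{site['status']} -> {new_status} not allowed")
    site["status"] = new_status
    return site

site = {"name": "INFN-PADOVA", "status": "candidate"}
advance(site, "uncertified")   # ROC manager validates the record
advance(site, "certified")     # CMT acceptance tests succeed
```

Encoding the transitions explicitly means an operator tool can reject out-of-order actions (e.g. certifying a site that was never validated) instead of silently corrupting the record.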
Using the Italian ticketing system, site managers then publish downtime notices to the INFN GRID calendar and, if a site is also registered in the EGEE GOC database, set downtime periods there and announce them through the EGEE broadcast tool to all Virtual Organisations' members. The site's status is then changed to closed in the site Information System. In a typical Grid site, installing a large number of computers with the same characteristics (e.g. a set of Worker Nodes) is a common task. In this case the CMT provides the site with an automatic method to set up the farm,
where each node has the same basic configuration: the aim is to simplify Grid middleware deployment in the INFN GRID/GRID.IT infrastructure. The procedure is documented in guides describing how to set up a server for Red Hat OS installation via Kickstart using PXE, and how to manage a local APT repository for the OS and the middleware. The package repository is managed with a customized version of YAM, a tool to mirror RPM repositories [2], while the OS installation is based on standard Kickstart files. The YAIM tool [3] is used to install and upgrade the middleware release.

Site certification procedure: having completed the procedures described above, the Resource Administrators at the site should:
- apply for DTEAM and INFNGRID VO membership, to allow test-job submission that checks the completeness of the local installation;
- contact the CMT and ask for quality testing of the site installation;
- request that the CMT perform acceptance tests before including the new site in the Information System.

The site is then included in the Grid Information System configuration files. The CMT changes the site status in the GOC database to certified, and the site is included in the daily monitoring procedures.

ROC shifts

The Italian ROC provides daily shifts to control and manage the Italian production Grid infrastructure. Shifts are organized with two people per shift and two shifts per day, Monday to Friday. The team on duty logs all actions in a web-based logbook; the log is passed to the next shift to provide a summary of the current status of the Grid. Open, updated and closed trouble tickets, the status of Grid services and sites, and successful site certifications are all logged. During the shift, the certification procedure is started for those sites whose downtime notice has expired.
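The shift step of finding sites whose published downtime has expired, so that re-certification can start, can be sketched as a simple scan over the site records. This is an illustration only; the data layout and function name are our assumptions, not the actual ROC tools:

```python
from datetime import datetime, timedelta

# Illustrative sketch of the daily-shift check described above: collect the
# sites whose downtime has expired so the team can start re-certification.

def expired_downtimes(sites, now):
    """Names of closed sites whose downtime period ended before `now`."""
    return [s["name"] for s in sites
            if s["status"] == "closed" and s["downtime_end"] <= now]

now = datetime(2006, 5, 15, 9, 0)
sites = [
    {"name": "INFN-BARI", "status": "closed",
     "downtime_end": now - timedelta(hours=2)},      # downtime is over
    {"name": "INFN-TORINO", "status": "closed",
     "downtime_end": now + timedelta(days=1)},       # still in downtime
    {"name": "INFN-CNAF", "status": "certified",
     "downtime_end": None},                          # in production
]

to_recertify = expired_downtimes(sites, now)
print(to_recertify)   # ['INFN-BARI']
```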
GridICE, a Grid monitoring tool developed in the EGEE/LCG framework [4], is used to check the status of the production Grid services and information systems on all the computing and storage resources. Shifters on duty also use the Site Functional Tests page [5] to check the status of the Italian production sites, and check the GIIS status of all production sites by accessing the LCG monitoring map [6] and the Italian top-level BDII (Berkeley Database Information Index).

USER REPORT AND ACCOUNTING

The DGAS Accounting Service

The Distributed Grid Accounting Service (DGAS) [7], developed within the European DataGrid and EGEE projects, implements resource usage metering and economic accounting in a fully distributed Grid environment. Each Grid computing resource and each Grid user is registered with an appropriate server, known as an HLR (Home Location Register), which keeps track of every submitted job. An arbitrary number of HLR servers can be used. By querying an HLR, it is possible to collect usage information per user, per VO or per group of users. The usage record of each completed job is replicated on both the user's HLR and the resource's HLR. The former can be used by VO managers to collect information on users' activity, while the latter can be queried by site managers to monitor the usage of their computing resources. The DGAS service has been included in the INFN GRID middleware releases.

Figure 1: DGAS architecture

The deployed software includes the services to be installed on the Computing Element (CE) and the Worker Node (WN), command-line tools (installed on the User Interface, UI) to query the usage-record databases, and the packages to install the HLR. A tool to monitor the stability of the system has also been prepared: it checks the functionality of the sensors and services running on the CE and the correct communication between the CE and the HLR.
The main DGAS services on the CEs are checked, in particular those responsible for collecting usage records and sending information to the HLR; afterwards, a simple test job is submitted to each site of the production Grid. For the sake of simplicity a single HLR is used for all the sites, but the target HLR can easily be specified inside the Job Description Language (JDL) file. After the test has executed, the corresponding accounting information is checked using the appropriate client software to query the user's HLR. This tool is currently used by ROC shifters during their daily activity to discover possible problems and fix them quickly.

The reporting tool
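The per-user and per-VO aggregation that an HLR query enables can be sketched as follows. The record fields and function name are our illustrative assumptions; real DGAS usage records carry many more attributes:

```python
# Sketch of aggregating HLR usage records per user or per VO, as described
# above. Record fields (user, vo, cpu_time) are illustrative only.
from collections import defaultdict

records = [
    {"user": "alice", "vo": "cms",   "cpu_time": 3600},
    {"user": "bob",   "vo": "atlas", "cpu_time": 1800},
    {"user": "alice", "vo": "cms",   "cpu_time": 1200},
]

def usage_by(records, key):
    """Total CPU time grouped by an arbitrary record field."""
    totals = defaultdict(int)
    for r in records:
        totals[r[key]] += r["cpu_time"]
    return dict(totals)

print(usage_by(records, "vo"))    # {'cms': 4800, 'atlas': 1800}
print(usage_by(records, "user"))  # {'alice': 4800, 'bob': 1800}
```

Grouping by an arbitrary field is what lets the same query answer both the VO manager's question (activity per user) and the site manager's (usage per VO).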
In order to provide usage graphs, VO comparison metrics and the global number of jobs executed in the production Grid infrastructure, a reporting tool called ROCRep has been developed [8]. It can report the number of submitted jobs per site and per Virtual Organisation, generating either a tabular report with detailed information or a graphical report based on pie charts and line graphs. It can also report the total Wall Clock Time and CPU Time used, again per site and per VO. Furthermore, trend curves of jobs executed in a selected set of sites are available, making it possible, for example, to check whether there were problems on a specific day or, in general, whether Grid resources have been overloaded or under-exploited. Another feature of ROCRep is an SLA (Service Level Agreement) service: it makes it possible to monitor whether the amount of resources provided by a site for the Grid remains stable, and a dedicated chart is provided for this purpose. Reporting data currently originate from the GridICE monitoring database; in the future, starting with a forthcoming INFN GRID deployment release, ROCRep will collect data directly from both the DGAS accounting system and GridICE.

Figure 2: ROCRep graphic reports

USER AND OPERATION SUPPORT

Italian ROC helpdesk

The Italian ROC ticketing system is built upon Xoops [9], a suite of web-based tools written in PHP. A complete portal with a large number of features is available to both Grid end users and Resource Centre administrators. The former can consult the knowledge base or the Wiki to retrieve useful information and try to solve their problems by themselves; the latter have fine-grained, tunable rights in order to give local support and to populate the documentation sections. XHelp is the main component of the suite, implementing a full helpdesk system with highly configurable ACLs (grouping users, site administrators, VO supporters, Central Management Team members and service technicians), unlimited custom fields (associating specific details with tickets), a modifiable ticket workflow, per-user profiles for ticket lists (searches can be saved and displayed on the summary page) and a notification system (e-mail, web-based messaging, RSS feeds). All components are accessible from the main interface of the portal, which provides a single, certificate-based registration and identification point.

The end user can open a request and view and follow his own tickets and the related replies; a supporter can view tickets assigned to his own groups, add responses and solutions, and change status/priority; a ticket manager can moreover work with attachments, with the FAQ and with ticket assignment (to other supporters and groups). While operating on tickets, side content is always available to all classes of users (according to their access level): Site Functional Tests in RSS form, the site downtime calendaring system, a file archive, network query tools, an IRC applet, contextual questions and answers, and reports from the CMT daily shifts. Administration of all the components is web-based too, enabling a restricted group of users to manage and modify the behaviour of specific elements from a user-friendly GUI. The whole system is written in object-oriented PHP, making it easy to extend single components, deactivate unneeded ones, hook modules together and add specific functionalities (like the GGUS interface described below). The whole suite is developed as an Open Source project, and specific ROC features are kept in sync with the official branch of the project, taking advantage of both locally developed components and external updates and bug fixes.

Integration with GGUS

Within EGEE, the ROC support infrastructures are connected via a central integration platform provided by the Global Grid User Support (GGUS) organisation. The GGUS model is described in detail in another paper presented at this conference [10].
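The three-role permission scheme described here (end user, supporter, ticket manager) can be sketched as a capability table. The role names and capabilities are taken from the text; the representation is our illustrative assumption, not the actual XHelp ACL implementation (which is PHP-based):

```python
# Sketch of the role-based ticket permissions described above.
PERMISSIONS = {
    "end_user":       {"open", "view_own", "reply"},
    "supporter":      {"view_group", "reply", "solve",
                       "set_status", "set_priority"},
    "ticket_manager": {"view_group", "reply", "solve", "set_status",
                       "set_priority", "attach", "faq", "assign"},
}

def can(role, action):
    """True if the given role is allowed to perform the action."""
    return action in PERMISSIONS.get(role, set())

assert can("supporter", "set_priority")
assert can("ticket_manager", "assign")
assert not can("end_user", "assign")     # only managers reassign tickets
```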
Concerning the Italian ROC, the XHelp module described above has been interfaced to the GGUS helpdesk application using web-service technologies [11]. Secure methods to create and update trouble tickets in the GGUS database are provided by the GGUS application. These methods are invoked through APIs available for most programming languages, which wrap the ticket information stored in the XHelp database into SOAP messages and send them to the WSDL contact URL. This
way, a trouble ticket submitted by a local user to the XHelp helpdesk that cannot be solved locally can be escalated by the local supporter across the ROC boundaries. The system allows for ticket assignment to any support unit of GGUS, as well as to all the other ROC helpdesks connected to GGUS via the interface. In this case, the ticket is shared among all the helpdesk databases involved in the workflow: it can be updated from every source, and any update propagates to all the other systems. In the other direction, trouble tickets originally submitted to GGUS can be assigned to the Italian ROC, triggering the automatic appearance of the ticket in the XHelp database, as well as the e-mail notification of the relevant support team. In this case, the GGUS ticket is sent to XHelp in an XML-formatted e-mail, which is first fetched from the mail server and then parsed by an importer tool running as a cron job on the XHelp server. The tool, written in Java, saves the ticket into the XHelp database, marking it as new if it does not yet exist locally, or updating the already existing ticket.
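The importer's create-or-update ("upsert") behaviour can be sketched as follows. This is an illustration only: the real tool is written in Java and fetches the message from a mail server, and the XML schema and field names here are our assumptions, not the actual GGUS format:

```python
# Illustrative sketch of the GGUS -> XHelp importer described above: an
# XML-formatted message is parsed and the ticket is created locally if it
# is new, or updated if it already exists.
import xml.etree.ElementTree as ET

local_db = {}  # stands in for the XHelp ticket table

def import_ticket(xml_text):
    """Parse one XML ticket message and upsert it into the local database."""
    root = ET.fromstring(xml_text)
    tid = root.findtext("id")
    fields = {"subject": root.findtext("subject"),
              "status": root.findtext("status")}
    is_new = tid not in local_db
    local_db.setdefault(tid, {}).update(fields)
    return "new" if is_new else "updated"

msg = ("<ticket><id>GGUS-1234</id><subject>RB down</subject>"
       "<status>assigned</status></ticket>")
print(import_ticket(msg))                          # new
print(import_ticket(msg.replace("assigned", "solved")))  # updated
```

The same idempotent upsert makes it safe for the cron job to re-process a message: a second delivery simply updates the existing record instead of creating a duplicate.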
The possibility of replacing the importer tool with a web-services layer on top of XHelp, allowing synchronous transfer of tickets from GGUS to XHelp in the same way as in the opposite direction, is under development and its implementation is planned for the coming months.

Figure 3: Basic GGUS-ROC ticket workflow

ACKNOWLEDGEMENTS

We thank the Resource Centre administrators and the INFN GRID/GRID.IT management: Roberta Amaranti (1), Daniele Andreotti (1), Roberto Barbera (1), Alessandro Barilari (2), Massimo Biasotto (1), Tommaso Boccali (1), Giovanni Bracco (9), Alessandro Brunengo (1), Federico Calzolari (6), Antonio Cazzato (1), Andrea Chierici (1), Giovanni Ciraolo (1), Vitaliano Ciulli (1), Mirko Corosu (1), Marco Corvo (1), Stefano Dal Pra (1), Maurizio Davini (3), Luca Dell'Agnello (1), Alessandro De Salvo (1), Matteo Diarena (2), Flavia Donno (1), Alessandra Doria (1), Alessandro Enea (10), Italo Epicoco (8), Francesco Fabozzi (1), Alessandra Fanfani (1), Enrico Fasanelli (1), Maria Lorenza Ferrer (1), Vega Forneris (5), Antonio Forte (1), Riccardo Gargana (1), Osvaldo Gervasi (2), Riccardo Gervasoni (1), Antonia Ghiselli (1), Alessandro Italiano (1), Stefano Lusso (1), Marisa Luvisetto (1), Tullio Macorini (1), Giorgio Maggi (1), Nicolo' Magini (1), Mirco Mariotti (1), Fabio Martinelli (1), Enrico Mazzoni (1), Mirco Mazzucato (1), Massimo Mongardini (1), Daniele Mura (1), Igor Neri (1), Cristiano Palomba (1), Andrea Paolucci (1), Antonio Pierro (1), Lucio Pileggi (6), Giuseppe Platania (1), Silvia Resconi (1), Diego Romano (4), Davide Salomoni (1), Francesco Safai Tehrani (1), Franco Semeria (1), Marco Serra (1), Leonello Servoli (1), Antonio Silvestri (1), Alessandro Spanu (1), Lucio Strizzolo (1), Giuliano Taffoni (7), Francesco Taurino (1), Gennaro Tortone (1), Claudio Vuerli (7).

(1) Istituto Nazionale di Fisica Nucleare (INFN), Italy; (2) University of Perugia, Italy; (3) University of Pisa, Italy; (4) University of Naples, Italy; (5) ESA's European Space Research Institute (ESRIN), Italy; (6) Scuola Normale Superiore di Pisa, Italy; (7) Istituto Nazionale di Astrofisica (INAF), Italy; (8) University of Lecce and SPACI Consortium, Italy; (9) Ente per le Nuove tecnologie, l'Energia e l'Ambiente (ENEA), Italy; (10) Istituto di Linguistica Computazionale, Consiglio Nazionale delle Ricerche (ILC-CNR), Pisa, Italy.

REFERENCES

[1] support.ac.uk/gridsite/gocmain/
[2]
[3] deployment.web.cern.ch/griddeployment/documentation/LCG2 Manual Install/
[4] C. Aiftimiei, S. Andreozzi, G. Cuscela, N. De Bortoli, G. Donvito, S. Fantinel, E. Fattibene, G. Misurelli, A. Pierro, G.L. Rubini, G. Tortone, "GridICE: Requirements, Architecture and Experience of a Monitoring Tool for Grid Systems", to appear in Proceedings of the International Conference on Computing in High Energy and Nuclear Physics (CHEP2006), Mumbai, India, February 2006
[5] sft.cern.ch:9443/sft/lastreport.cgi
[6]
[7] R. Piro, A. Guarise and A. Werbrouck, "An Economy-based Accounting Infrastructure for the DataGrid", Proceedings of the 4th International Workshop on Grid Computing (Grid2003), Phoenix, Arizona, USA, November 17th, 2003
[8] it.cnaf.infn.it/rocrep
[9]
[10] P. Strange, T. Antoni, F. Donno, H. Dres, G. Mathieu, A. Mills, D. Spence, M. Tsai and M. Verlato, "Global Grid User Support: the model and experience in the Worldwide LHC Computing Grid", to appear in Proceedings of the International Conference on Computing in High Energy and Nuclear Physics (CHEP2006), Mumbai, India, February 2006
[11] GGUS ROC Interface Project Home