
ATLAS Computing Operations within the GridKa Cloud (1)

J. Kennedy (2), C. Serfon, G. Duckeck, R. Walker (LMU Munich), A. Olszewski (Institute of Nuclear Physics, Krakow), S. Nderitu (University of Bonn), and the ATLAS GridKa operations team

(1) Throughout this paper the term Cloud refers to a federation of resources.
(2) At present at the Rechenzentrum Garching of the Max Planck Society and the IPP.

Abstract. The organisation and operations model of the ATLAS T1-T2 federation/cloud associated with the GridKa T1 in Karlsruhe is described. Attention is paid to Cloud-level services and the experience gained during the last years of operation. The ATLAS GridKa Cloud is large and diverse, spanning five countries and two ROCs, and currently comprises 13 core sites. A well defined and tested operations model in such a Cloud is of the utmost importance. We have defined the core Cloud services required by the ATLAS experiment and ensured that they are performed in a managed and sustainable manner. Services such as Distributed Data Management (involving data replication, deletion and consistency checks), Monte Carlo production, software installation and data reprocessing are described in greater detail. In addition to providing these central services we have undertaken several Cloud-level stress tests and developed monitoring tools to aid with Cloud diagnostics. Furthermore, we have established good channels of communication between ATLAS, the T1 and the T2s, and benefit from pro-active contributions from the T2 manpower. A brief introduction to the GridKa Cloud is provided, followed by a more detailed discussion of the operations model and ATLAS services within the Cloud.

1. Introduction
This document describes the ATLAS T1-T2 federation associated with the GridKa T1 in Karlsruhe, Germany, and the operational model which has been deployed to ensure that ATLAS services within this Cloud run smoothly. A brief introduction to the GridKa Cloud is provided, followed by a more detailed discussion of the ATLAS services within the Cloud.

2. Sites and Infrastructure
The ATLAS GridKa Cloud is formed from the GridKa T1 center at Karlsruhe in Germany and several associated T2 centers within Germany, Poland, the Czech Republic, Switzerland and Austria (see figure 1):

The GridKa T1 at FZK - DE
A federated T2 from LMU/LRZ and MPI/RZG in Munich - DE
A federated T2 from Wuppertal and Freiburg - DE
A federated T2 between DESY Hamburg, DESY Zeuthen and Goettingen - DE

FZU Prague - CZ
CYF Krakow - PL
CSCS Manno - CH
Innsbruck - AT

In addition to the T1 and T2 centers, several T3 centers are also active. The Cloud is unusually diverse, covering several countries (DE, CH, CZ, PL, AT) as well as two ROC regions (DECH, Central Europe). This diversity requires an extra effort to ensure good communication.

Figure 1. A view of the GridKa Cloud with the data movement (FTS) channels shown.

3. Operational Approach and Organisation
In our experience the development of a good operational and organisational model is the key to a successful Cloud. We have placed emphasis on forming groups to perform the necessary tasks and services within the Cloud, and on coordination at both the global and subgroup level. This section details the steps we have taken to provide organisation and develop the needed subgroups.

Cloud Coordinator
During the initial phase of the Cloud setup a single person was identified to perform the central role of coordination in the Cloud. The role of this so-called Cloud Coordinator was to provide technical and operational coordination during the initial Cloud startup. The responsibilities of this role included:
(i) Ensure that good contact exists between the sites within the Cloud
(ii) Maintain contact with ATLAS computing for both operational and development issues
(iii) Have a good understanding of the ATLAS services running within the Cloud, their dependencies, development and relation to each other
(iv) Provide planning for tests, both internal to the Cloud and ATLAS-wide
(v) Organise meetings and keep information flowing

As the Cloud evolved, sub-coordination roles were filled in areas such as DDM and MC production, and the level of communication between these areas was enhanced. This sharing of responsibilities, coupled with regular communication between the distinct areas, led to a reduced need for a single coordinator. After an initial setup period of 2-3 years the Cloud coordinator role became less essential, as several groups now covered its responsibilities. The Cloud now consists of an organised group rather than a top-down hierarchy.

Site Deployment and Contacts
The startup phase of the Cloud saw the deployment of gLite middleware at the sites and the identification of site contacts. Each site (T1, T2) identified a list of people who act as site contacts. Ideally this contact list should contain an ATLAS-aware person and a system administrator for the relevant site.

Identifying Services
The focus within the Cloud during the last months has moved on from the initial site deployment and has become more operations centered. The major ATLAS-level services and tasks which we believe should and must operate within our Cloud were defined, with manpower invested in each area. The main service areas are:
(i) Monte-Carlo Production
(ii) Distributed Data Management
(iii) ATLAS Software Installation
(iv) Distributed Analysis
(v) Data Reprocessing
These areas are described in more detail in section 4 (Services).

Organisation
The main organisational areas addressed are planning, communication and documentation.
Planning: Functional tests, both central ATLAS and internal to the Cloud, need to be planned and executed.
Communication: Ensure information flow between ATLAS and the working groups/sites within the Cloud. A mailing list has been set up and monthly video conferences are held. A weekly phone conference is held between the ATLAS technical contacts at GridKa and the Cloud coordinator.
Documentation: A Cloud wiki page has been set up where the Cloud is described and information about functional tests etc. is recorded.

T1 Contact
The Tier 1 center at GridKa is of special importance to the whole Cloud, since the services it provides, such as the LFC and FTS, are central to the Cloud. Good contact with the Tier 1 administration team and a high level of information exchange and planning are required to ensure smooth operations within the Cloud as a whole. ATLAS contacts are stationed at the T1 center at GridKa for several days each week, where they attend the T1 middleware/services meetings, both providing information from the experimental community and gaining information from the site administration. In this way we ensure that a solid base for planning activities and tests is provided, and that problems are quickly identified and solved.

Figure 2. Two views of the GridKa monitoring web pages. High-level views are provided which allow users to gain an impression of the health of the overall system, so that problems can be quickly identified. The ability to drill down to more informative low-level views is available, enabling users to identify the root cause of a problem.

To ensure that a high level of service is maintained during daily operations, an extensive monitoring framework is provided by GridKa. The ATLAS T1 contact can use this monitoring information to identify problems as well as to gauge the impact of specific ATLAS activities on the GridKa system.

4. Services

4.1. Distributed Data Management (DDM)
The Distributed Data Management (DDM) system is based on Grid software packages which automatically manage the data and provide a transparent and unique view of the connected resources. The ATLAS framework for DDM is called dq2. DDM operations are performed by two teams: one central team at CERN and one local Cloud team which manages DDM issues for GridKa and the associated Tier 2s [1]. These two teams work in close collaboration and perform the following tasks:

Analysis data distribution: The local Cloud team helps define the distribution plan for data within the Cloud and additionally monitors its distribution. This requires liaison with the sites as well as with central ATLAS DDM, to ensure that data is replicated in accordance with the computing model while making intelligent use of the resources within the Cloud.

T0 tests and T1-T2 functional tests: Throughout the last years several scaling and functional tests have been performed to ensure that data management within the Cloud is functional and well understood. Figures 3 and 4 show the results of throughput tests in 2007, during which nominal rates to the T1 site and the accompanying T2s were reached.

Integrity checks: Checks are performed on a regular basis to ensure that the files on the storage systems and the information registered in the LFC about these replicas are consistent. It has been observed that files may be registered in the LFC yet be unavailable at a site, and vice versa. The former results in job failures since the data is unavailable, while the latter leads to a data leak (often referred to as dark data) where storage resources are used but the files are effectively lost to the grid. The GridKa ATLAS DDM team developed scripts which may be used to cure these problems, and these are run on a regular basis.
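
The idea behind such a consistency check can be illustrated with a short sketch. The script below is not one of the GridKa scripts; it assumes two hypothetical plain-text dumps (one of the replicas registered in the LFC, one of the files actually found on the storage element) and simply reports the two kinds of mismatch.

#!/usr/bin/env python
"""Minimal sketch of an LFC/storage consistency check (illustration only).

Assumes two hypothetical plain-text dumps, one SURL per line:
  lfc_replicas.txt  -- replicas registered in the LFC for this site
  storage_dump.txt  -- files actually present on the storage element
Neither file name nor format corresponds to the actual GridKa scripts.
"""

def read_surls(path):
    # Return the set of non-empty, stripped lines (SURLs) from a dump file.
    with open(path) as dump:
        return {line.strip() for line in dump if line.strip()}

def compare(lfc_dump, se_dump):
    registered = read_surls(lfc_dump)
    on_storage = read_surls(se_dump)
    # Registered in the LFC but missing on storage: jobs reading them will fail.
    lost = registered - on_storage
    # Present on storage but not registered: "dark data" occupying space.
    dark = on_storage - registered
    return lost, dark

if __name__ == "__main__":
    lost, dark = compare("lfc_replicas.txt", "storage_dump.txt")
    print("%d lost replicas (candidates for catalogue cleaning)" % len(lost))
    print("%d dark files (candidates for deletion from storage)" % len(dark))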

Figure 3. Throughput (MB/s) to FZK during T0 tests from the 15th of May to the 15th of June. The nominal throughput is about 90 MB/s.

Figure 4. Throughput (MB/s) to the FZK Tier 2s during T0 tests.

Cleaning of datasets (in the event of data loss): In the event of data loss, such as the failure of a storage pool or the accidental deletion of a file, actions must be taken to ensure the integrity of the metadata of the associated datasets. If files are lost completely, i.e. no other replica exists, the LFC and central DDM catalogs require cleaning to remove the file entries. If, on the other hand, a replica exists, it may simply be copied back to the site which suffered the data loss.

Datasets and FTS monitoring: The transfer of data to the sites is monitored using central DDM and local Cloud monitoring tools. Members of the DDM team ensure not only that data is being transferred to sites at the nominal rate but also that dataset completion is achieved. Figure 5 shows an example distribution of file arrival times after a dataset is subscribed to a site.

4.2. Software Installation
The ATLAS software framework ATHENA evolves at a fast pace, with new major versions of ATHENA being released regularly. Each new major version requires installation at all the Tier 1 and Tier 2 sites within the GridKa Cloud, and these installations are subsequently validated. This large-scale deployment enables the MC production system, which needs the new major ATHENA versions, to use all of the available ATLAS computing power.

New releases of the ATHENA software are prepared as so-called grid installation kits. These installation kits are run by the ATLAS installation team via grid jobs at the specific site. The grid job installs the corresponding ATHENA version and validates the installation. Each time a new version is released, a centrally managed script starts an automatic installation at all known ATLAS Tier 1 and Tier 2 sites. If the automatic installation fails, a member of the ATLAS installation team, normally the software manager assigned to the Cloud to which the site belongs, attempts a manual installation. If the manual installation also fails, the software manager contacts the responsible site administrator to better understand and solve the problem. In addition, the installation team ensures that older, outdated ATHENA releases are removed from sites and, on occasion, installs minor ATHENA versions at sites when specific versions are requested by users.
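
The escalation logic of the installation procedure described above can be sketched as follows. This is an illustration only: submit_install_job() and notify_site_admin() are invented placeholders rather than the real installation-kit tooling, and the site list and release tag are example values.

#!/usr/bin/env python
"""Sketch of the release-installation escalation logic (illustration only)."""

SITES = ["FZK-LCG2", "LRZ-LMU", "DESY-HH"]   # example site names
RELEASE = "15.X.Y"                            # placeholder release tag

def submit_install_job(site, release, manual=False):
    # Placeholder: a real implementation would submit a grid job running the
    # installation kit and return whether installation plus validation succeeded.
    print("installing %s at %s (%s)" % (release, site, "manual" if manual else "auto"))
    return True

def notify_site_admin(site, release):
    print("contacting %s admins: installation of %s failed" % (site, release))

def install_everywhere(sites, release):
    for site in sites:
        if submit_install_job(site, release):                # automatic attempt
            continue
        if submit_install_job(site, release, manual=True):   # manual retry by the software manager
            continue
        notify_site_admin(site, release)                     # last resort: involve the site

if __name__ == "__main__":
    install_everywhere(SITES, RELEASE)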

Figure 5. Evolution of the number of files of a dataset on FZKDISK versus time, from the subscription to the completion of the dataset.

4.3. Monte-Carlo Production
The ATLAS Monte-Carlo production is performed by the production system team. A central database is maintained at CERN in which so-called tasks are defined, where a task is a collection of jobs of a particular type which are defined together. Tasks are assigned to a Cloud, and the output data associated with a task is aggregated via the DDM system on the Tier 1 of the assigned Cloud.

The distribution of jobs to a Cloud has varied over the last years, with the current model using pilot jobs which are submitted to the Cloud sites and subsequently pull a real job onto the worker node once the pilot starts running. This method circumvents many problems which can occur in the job submission phase and as such increases the efficiency of the production system. A central monitoring framework is provided and a shift team follows the progress of tasks, raising trouble tickets when a task, site or Cloud experiences problems.

Initially MC production was performed in a stress-test manner, in which a large amount of data was produced over a period of several months and the system was subsequently assessed and evaluated after the production run. During the last years, however, the production system has operated continuously. Within the Cloud two instances of the pilot submission servers, known as pilot factories, are supported. This ensures that the Cloud production team have a high level of influence over the job distribution within the Cloud and also that more information is available to them for debugging purposes. The number of jobs which may be run within the Cloud has increased dramatically and it is now possible to run over 7500 jobs simultaneously on the GridKa Cloud resources. Figure 6 shows the number of running production jobs for the period May - May 2009 as a function of Cloud.
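
The pilot mechanism described above can be sketched roughly as follows. Here fetch_payload() stands in for the request a pilot makes to the central job dispatcher; it is not the real PanDA protocol, and the payload shown is an invented example.

#!/usr/bin/env python
"""Sketch of the pilot-job idea: the payload is fetched only after the pilot is
already running on a worker node, so problems in site job submission cannot
affect real production jobs."""
import subprocess

def fetch_payload():
    # Placeholder: a real pilot would contact the central dispatcher over the
    # network and receive the command line of an assigned production job.
    return ["echo", "running production job 42"]   # invented example payload

def run_pilot():
    payload = fetch_payload()
    if payload is None:
        print("no work available, pilot exits cleanly")
        return
    # Run the real job and report its exit status back (here: just print it).
    status = subprocess.call(payload)
    print("payload finished with exit status %d" % status)

if __name__ == "__main__":
    run_pilot()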

Figure 6. Monte Carlo production during the period May - May 2009 as a function of Cloud. The GridKa (DE) Cloud is shown in light green.

ADCOS Shifts
Through the last years we have taken an increasingly active role in the production shifts group. The production shifts are designed to ensure that at any point in time a group of people is responsible for managing the production and for tracking and reporting problems and bugs. The expertise gained in running production and in identifying and treating problems within the grid has helped greatly during the last year. By aiding with the shifts and also helping to produce the tools required for the system, we have helped improve the efficiency of our Cloud and of ATLAS production as a whole.

Distributed Analysis
As the LHC gears up towards turn-on and data taking comes ever closer, we have seen an increasing turn of attention towards distributed analysis. It is extremely important that a stable and simple user analysis framework be put in place to allow physics users the opportunity to exploit the grid resources, while ensuring that these same resources are not adversely impacted by the chaotic analysis patterns which are expected. A great deal of effort has been put into the development of user analysis tools and the evaluation of their usage on the Cloud's resources.

One of the user analysis tools, Ganga [2], has been used to extensively test the resources in several Clouds, including the GridKa Cloud. Figure 7 shows the results of a test run of the so-called Ganga Robot against the resources in the GridKa Cloud. Here a high-level view of the jobs, split by site and status, is shown. The test suite, however, provides a more detailed level of information which allows a fine-grained analysis of user analysis performance at individual sites. An example of this is shown in figure 8, where the CPU/walltime ratio is shown for a sample of analysis jobs run at a site. The left plot shows a clear dominance of walltime, indicating a problem when performing analysis at the site. Upon investigation it was found that a poor distribution of data on the site led to a bottleneck when many jobs attempted to access this data. A redistribution of the data led to the much improved CPU/walltime distribution seen in the right plot. Through continued regular tests of the analysis framework the GridKa Cloud has become ever more ready for the start of a more aggressive user analysis period, which is expected once LHC data taking starts.
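
The CPU/walltime check used to spot such data-access bottlenecks amounts to a very simple calculation, sketched below with invented job records; in practice the numbers would come from the Ganga Robot test output for a given site.

#!/usr/bin/env python
"""Sketch of the CPU/walltime check used to flag data-access bottlenecks."""

# (cpu seconds, wallclock seconds) for a handful of hypothetical test jobs
JOBS = {
    "SITE-A": [(550, 3600), (600, 3500), (580, 3700)],     # mostly waiting for data
    "SITE-B": [(3300, 3600), (3400, 3650), (3250, 3500)],
}

THRESHOLD = 0.5   # below this average ratio, the site is worth investigating

def cpu_walltime_ratio(records):
    # Total CPU time divided by total wallclock time for the job sample.
    return sum(cpu for cpu, _ in records) / float(sum(wall for _, wall in records))

if __name__ == "__main__":
    for site, records in sorted(JOBS.items()):
        ratio = cpu_walltime_ratio(records)
        flag = "  <-- investigate data placement" if ratio < THRESHOLD else ""
        print("%s: cpu/walltime = %.2f%s" % (site, ratio, flag))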

Figure 7. Ganga Robot tests run against the GridKa Cloud.

Figure 8. The CPU/walltime ratio for distributed analysis test jobs before (left) and after (right) re-distribution of data at a site to enhance access rates.

The analysis tests have also been folded into the SAM framework, such that ATLAS-specific tests are run on a regular basis and sites can ensure that they are able to support not only the grid functionality but also the experimental requirements.

Cloud Monitoring
Several central monitoring tools exist to monitor data management, MC production, site availability etc. These tools are well developed and provide a great deal of information. Despite the existence of these tools, two monitoring projects were undertaken within the GridKa Cloud. Firstly, a meta-monitoring project was undertaken to gather information from several sources and provide a global overview of the status of the Cloud, as shown in Figure 9. A global overview can easily be gained from this page, yet a large amount of detail is nevertheless contained within the single view. The Cloud monitoring page allows users to quickly identify possible problems, and links are provided to the original monitoring sources, allowing a more in-depth analysis of the problem to be undertaken.
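
The meta-monitoring approach can be sketched as follows. The sources, sites and status values below are invented placeholders; a real implementation would query the actual DDM, production and SAM monitors and keep links back to them.

#!/usr/bin/env python
"""Sketch of a meta-monitoring aggregator: several independent monitoring
sources are reduced to one per-site status, remembering which source
reported the worst state so the user can drill down."""

SEVERITY = {"OK": 0, "WARNING": 1, "ERROR": 2}

def poll_sources():
    # Placeholder data: {source_name: {site: status}}.
    return {
        "ddm-monitor": {"FZK-LCG2": "OK", "LRZ-LMU": "WARNING"},
        "production-monitor": {"FZK-LCG2": "OK", "LRZ-LMU": "ERROR"},
        "sam-tests": {"FZK-LCG2": "OK", "LRZ-LMU": "OK"},
    }

def aggregate(per_source):
    # Worst status wins; keep the name of the source that reported it.
    overview = {}
    for source, statuses in per_source.items():
        for site, status in statuses.items():
            current = overview.get(site)
            if current is None or SEVERITY[status] > SEVERITY[current[0]]:
                overview[site] = (status, source)
    return overview

if __name__ == "__main__":
    for site, (status, source) in sorted(aggregate(poll_sources()).items()):
        print("%-10s %-8s (details: %s)" % (site, status, source))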

Figure 9. The GridKa Cloud monitoring web page. A wide range of information is gathered from several monitoring sources and presented in a single global view.

Secondly, a monitoring project was undertaken at a lower level to allow the analysis of data transfers between centers. Data may be moved between the sites in several manners which have slightly differing dependencies. By performing and monitoring several data transfer methods we can (a) gain a better understanding of the data movement in the Cloud and (b) identify problems more precisely. For instance, a situation in which data transfer via the managed FTS system slows down while direct transfers remain constant would indicate a problem within the FTS system rather than with the site or network. Figure 10 shows the data transfer monitoring: transfers can be seen from the T1 to a T2 site with several different transfer mechanisms being deployed.

Reprocessing
A reprocessing of the RAW ATLAS data will take place at regular intervals to allow the application of improved calibration and alignment data and improved algorithms. The RAW data is stored on tape at the T1 site, and as such a staging plan is needed to ensure that data is intelligently and quickly staged from tape for processing, and later removed from the disk cache to allow further reprocessing to take place.

The pre-staging task, involving massive recalls from tape, is inherently difficult. Several large-scale tests have been performed to evaluate the system and identify problems. This requires good contact between the experimental and site working groups and an in-depth understanding of both the physical system deployed at the T1 site and the experimental framework for reprocessing. Several recall tests have been performed with increasing levels of success, with sustained recall rates of up to 190 MB/s observed. Figure 11 shows the results from such a recall test. Numerous physics datasets were recalled from tape, with an initially high rate being observed followed by a slowdown in which a large time interval passes until the final files are staged.
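
A pre-staging driver of the kind exercised in these tests can be sketched as follows. Here request_stage() and is_staged() are placeholders for the storage system's bring-online interface; the file names, chunk size and polling interval are invented for illustration.

#!/usr/bin/env python
"""Sketch of a tape pre-staging driver for reprocessing (illustration only)."""
import time

CHUNK = 100          # files requested per bring-online call
POLL_SECONDS = 1     # unrealistically short, just to keep the sketch quick

def request_stage(files):
    # Placeholder for a bulk bring-online request to the storage system.
    print("requesting stage of %d files" % len(files))

def is_staged(name):
    # Placeholder: pretend every requested file arrives on disk immediately.
    return True

def prestage(files):
    staged, start = set(), time.time()
    # Issue the recall requests in chunks rather than all at once.
    for i in range(0, len(files), CHUNK):
        request_stage(files[i:i + CHUNK])
    # Poll until everything is on the disk cache, reporting progress.
    while len(staged) < len(files):
        staged.update(f for f in files if f not in staged and is_staged(f))
        print("%d/%d files on disk after %.0fs" % (len(staged), len(files), time.time() - start))
        if len(staged) < len(files):
            time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    prestage(["raw.data.%04d" % i for i in range(250)])   # invented dataset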

Figure 10. An example of the data transfer monitoring deployed in the GridKa Cloud. Transfers from the T1 to a T2 site are shown via several possible transfer mechanisms.

Figure 11. Data retrieval from tape at FZK.

5. Conclusion
A successful operational model has evolved within the GridKa Cloud over several years. Each major aspect of the Cloud operations is covered by a working group, and good channels of communication are established between the groups and the sites. Although we face many challenges as ATLAS data taking starts, we are confident that we will see a well functioning and successful Cloud, with every opportunity for great physics results.

6. Acknowledgements
I would like to thank the many people who contributed to the GridKa Cloud operations for their time and effort, for making this venture work and for making the last few years so enjoyable. I would like to specifically thank Gen Kawamura and the Goettingen group for their work on the Cloud monitoring.

References
[1] Serfon C et al., Data Management tools and operational procedures in ATLAS, CHEP09 proceedings
[2] Elmsheuser J et al., Distributed Analysis in ATLAS using GANGA, CHEP09 proceedings
