1 NA4 Applications
F. Harris (Oxford/CERN), NA4/HEP coordinator
EGEE is a project funded by the European Union under contract IST-2003-508833
2 Talk Outline
- The basic goals of NA4
- The organisation
- The participants and their roles
- The flavour of the work for the NA4 sub-groups: Biomed, HEP, Generic applications, Testing, Industry Forum
- Milestones and deliverables
- Relations with other EGEE activities
- Concluding comments
3 NA4: Identification and support of early-user and established applications on the EGEE infrastructure
- To identify, through the dissemination partners and a well-defined integration process, a portfolio of early-user applications from a broad range of application sectors in academia, industry and commerce.
- To support development and production use of all of these applications on the EGEE infrastructure, and thereby establish a strong user base on which to build a broad EGEE user community.
- To focus initially on two well-defined application areas, Particle Physics and Life Sciences, while developing a process for supporting other application areas.
4 NA4 organisational structure
Organisation chart: NA4 sits within EGEE alongside NA3, with an NA3 liaison, and comprises the HEP applications, Biomedical applications, Generic applications, Test team and Industry Forum groups. Associated artefacts: test suites; web site, deliverables and internal notes; application-specific software; grid interfaces; meetings and reports.
5 NA4 technical organisation
Organisation chart: the NA4 AWG (V. Breton) sits between LCG and the EGEE PEB. Sub-groups and leads: Test team (R. Météry, Eric Fede); HEP (F. Harris, M. Lamanna), linked to ARDA and the data challenges; Biomed (J. Montagnat, C. Blanchet), with the Biomed technical team; Generic (R. Barbera), with the Generic technical team.
6 Roles and staffing
Federation | Role | FTE funded | FTE unfunded
CERN | HEP applications (coord.) | 4 | 4 (9)
UK + Ireland | NA3 liaison | 0.5 | 0.5
Italy | Generic applications (coord.) | 2 | 2
France | General coord., BioMed, Test team, Industry diss. | 7 | 7
Northern Europe | Generic applications | 1 | 1
Germany + Switzerland | Generic applications | 1 | 1
Central Europe | Generic applications | 1 | 1
South West Europe | BioMed | 2 | 2
Russia | HEP, BioMed | 3 | 3
Totals | | 21.5 | 21.5
7 What do all applications expect from the grid?
- Access to a world-wide virtual computing laboratory with vast resources
- The possibility to organise in VOs (virtual organisations), with members given access rights according to their roles in the VO, and facilities to protect data
- Transparency in data and job management via easy-to-use application interfaces
- Definite added value in performance for both interactive and batch computation
8 Biomedical Applications
Some key applications and their characteristics:
- Bioinformatics: gene/proteome database distribution
- Medical applications (screening, epidemiology, ...): image database distribution
- Parallel algorithms for medical image processing, simulation, etc.
- Interactive applications (human supervision or simulation)
- Security/privacy constraints
- Heterogeneous data formats (genomics, proteomics, image formats)
- Frequent data updates
- Complex data sets (medical records)
- Long-term archiving requirements
9 BLAST: comparing DNA or protein sequences
BLAST is the first step in analysing new sequences: it compares DNA or protein sequences against others stored in personal or public databases. It is ideal as a grid application:
- Requires resources to store the databases and run the algorithms
- Can compare one or several sequences against a database in parallel
- Large user community
10 BLAST gridification
Diagram: a multi-sequence input file (Seq1 ... Seqn) is split at the User Interface into chunks; each chunk is dispatched to a Computing Element that runs BLAST against a replicated copy of the database (DB), and the partial outputs are merged into a single RESULT file.
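To make the splitting step concrete, here is a minimal Python sketch of how a multi-sequence FASTA input could be divided into one chunk per computing element. The file names, chunk count and round-robin strategy are illustrative assumptions, not the actual NA4 implementation.

```python
# Sketch of the BLAST "gridification" splitting step: a multi-FASTA input
# is divided into chunks, one per grid job, so that each computing element
# can run BLAST on its chunk against a locally replicated database.
# File names and chunk count below are illustrative assumptions.

def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, seq = None, []
    with open(path) as f:
        for line in f:
            line = line.rstrip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(seq)
                header, seq = line, []
            else:
                seq.append(line)
        if header is not None:
            yield header, "".join(seq)

def split_fasta(path, n_chunks):
    """Distribute sequences round-robin over n_chunks chunk files."""
    chunks = [[] for _ in range(n_chunks)]
    for i, (header, seq) in enumerate(read_fasta(path)):
        chunks[i % n_chunks].append(f"{header}\n{seq}\n")
    names = []
    for i, chunk in enumerate(chunks):
        name = f"input_chunk_{i}.fasta"
        with open(name, "w") as f:
            f.writelines(chunk)
        names.append(name)
    return names

if __name__ == "__main__":
    # One chunk per computing element; each chunk becomes one BLAST grid job.
    print(split_fasta("sequences.fasta", 4))
```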
11 Monte Carlo simulation for radiotherapy planning
Diagram of the GATE workflow: scanner slices (DICOM format) are anonymised and stored in a database at the User Interface, then concatenated into an image (text file) and a binary file Image.raw (size 19M); JDLs are submitted to the Computing Elements, the medical image files are copied from the Storage Element to the CEs, GATE runs at sites such as CCIN2P3, RAL, NIKHEF and MARSEILLE (each with a Computing Element and Storage Element), and the root output files are retrieved from the CEs.
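As an illustration of the submission step in this workflow, the following Python sketch generates one LCG-2-style JDL per image slice and submits it from the User Interface. The executable name, sandbox contents and Requirements expression are hypothetical; edg-job-submit is assumed here as the LCG-2-era submission command.

```python
# Hedged sketch of the GATE job flow on LCG-2: generate one JDL per scanner
# slice and submit it with the command-line tools. The executable name,
# sandbox file names and Requirements expression are illustrative.
import subprocess

JDL_TEMPLATE = """\
Executable    = "run_gate.sh";
Arguments     = "{slice_name}";
InputSandbox  = {{"run_gate.sh", "{slice_name}"}};
OutputSandbox = {{"gate_output_{index}.root", "stdout.log", "stderr.log"}};
Requirements  = other.GlueCEPolicyMaxWallClockTime > 720;
"""

def make_jdl(index, slice_name):
    """Write a JDL file for one slice and return its path."""
    path = f"gate_job_{index}.jdl"
    with open(path, "w") as f:
        f.write(JDL_TEMPLATE.format(slice_name=slice_name, index=index))
    return path

def submit(jdl_path):
    # edg-job-submit was the LCG-2 era submission command; a later
    # gLite/WMS setup would use glite-wms-job-submit instead.
    subprocess.run(["edg-job-submit", jdl_path], check=True)

if __name__ == "__main__":
    for i, s in enumerate(["slice_000.dcm", "slice_001.dcm"]):
        submit(make_jdl(i, s))
```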
12 HEP applications and the grid
- HEP has been running large distributed computing systems for many years.
- The focus for the future is now on computing for the LHC, hence the LCG (LHC Computing Grid) project.
- In addition to the 4 LHC experiments (ATLAS, ALICE, CMS, LHCb), other current HEP experiments use grid technology, e.g. BaBar, CDF, D0, and don't forget theory and other new HEP experiments.
- LHC experiments are currently executing large-scale data challenges (DCs) involving thousands of processors world-wide and generating many terabytes of data.
- They are moving to so-called chaotic use of the grid, with individual user analysis (thousands of users operating within experiment VOs); see ARDA.
13 LHC Experiments (ATLAS, CMS, ALICE, LHCb)
Storage: a raw recording rate of order GByte/s, accumulating at 5-8 PetaByte/year, with 10 PetaByte of disk. Processing: the equivalent of 200,000 of today's fastest PCs.
14 LHC Computing Grid Project: a Collaboration
Building and operating the LHC grid is a collaboration between:
- The physicists and computing specialists from the LHC experiments
- The projects in Europe and the US that have been developing grid middleware
- The regional and national computing centres that provide resources for the LHC
- The research networks
Together: researchers, software engineers and service providers.
15 LHC Computing Model (simplified!!)
- Tier-0 (the accelerator centre): filter raw data; reconstruction into summary data (ESD); record raw data and ESD; distribute raw and ESD to the Tier-1s.
- Tier-1: permanent storage and management of raw, ESD, calibration data, metadata, analysis data and databases; grid-enabled data service; data-heavy analysis; re-processing raw to ESD; national and regional support; online to the data acquisition process; high availability; long-term commitment; managed mass storage.
- Tier-2 (small centres): well-managed, grid-enabled disk storage; simulation; end-user analysis, batch and interactive; high-performance parallel analysis (PROOF).
- Plus desktops and portables.
Sites named on the slide include MSU, RAL, IC, IN2P3, IFCA, UB, Cambridge, FNAL, Budapest, CNAF, Prague, FZK, Taipei, PIC, TRIUMF, Legnaro, BNL, ICEPP, CSCS, Rome, CIEMAT, Krakow, NIKHEF and USC.
16 Current estimates of computing resources needed at major LHC centres, first full year of data
Table of processing (M SI2000**), disk (PetaBytes) and mass storage (PetaBytes) requirements for CERN, the major data handling centres (Tier 1), other large centres (Tier 2), and totals; data distribution ~70 Gbits/sec. ** A current fast processor is ~1K SI2000.
17 Characteristics of the CMS Data Challenge
DC04 (just completed) ran with LCG-2 and CMS resources world-wide (US Grid3 was a major component).
- Pre-Challenge Production (Phase 1): simulation, generation and digitisation. After 8 months of continuous running: 750,000 jobs, 3,500 KSI2000-months, 700,000 files, 80 TB of data.
- Data Challenge (Phase 2): ran the full data reconstruction and distribution chain at 25 Hz. Achieved 2,200 jobs/day (about 500 CPUs) running at Tier-0; 45,000 jobs in total at Tier-0, with files registered to the RLS (with POOL metadata); 570,000 files registered to the RLS in total; 4 MB/s produced and distributed to each Tier-1.
18 ARDA (Architectural Roadmap for Distributed Analysis)
Diagram: the ARDA collaboration (coordination, integration, specification, priorities, planning) connects the distributed analysis efforts of ALICE, CMS, ATLAS and LHCb (with GAE, PROOF, SEAL and POOL) to EGEE NA4 (application identification and support), to the LCG-GAG (Grid Application Group) supplying experience and use cases, to the resource providers community, and to the EGEE middleware.
19 NA4 Generic Applications
- This is a key activity in getting new scientific and industrial communities interested in, and committed to using, the continental grid infrastructure built by the EGEE project.
- GENIUS is a well-established tool which will be fundamental in interfacing new applications with the EGEE middleware, hiding its complex internals from non-expert users from new communities.
- GILDA is a complete suite of grid elements (test-bed, CA, VO, monitoring system, web portal) and applications fully dedicated to dissemination purposes. It could also serve as the ideal grid testbed on which to start porting new generic applications.
- GILDA is the dissemination tool which will be used by NA3 during courses and tutorials, so the important aspect of inducting new communities into the grid paradigm is also covered.
- It is now important to hold the first meeting of the EGAAP board and define the first generic applications to be interfaced.
20 The GILDA home page
21 The Generic Application questionnaire (contribution to MNA4.1)
- A questionnaire to gather information and first requirements from new communities interested in using the EGEE infrastructure.
- Feedback received so far: Astrophysics (EVO and the Planck satellite); Earth Observation (ozone maps, seismology, climate); Digital Libraries (DILIGENT project); Grid Search Engines (GRACE project); Industrial applications (SIMDAT project).
- Interest also from the Computational Chemistry (Italy and Czech Republic), Civil Engineering (Spain), and Geophysics (Switzerland and France) communities.
22 EGEE Industry Forum Objectives
- The main role of the Industry Forum in the Enabling Grids for E-Science in Europe (EGEE) project is to raise awareness of the project among industry and to encourage businesses to participate in it.
- This will be achieved by making direct contact with industry and by liaising between the project partners and industry, ensuring both that businesses get what they need in terms of standards and functionality and that the project benefits from the practical experience of businesses.
- The members of the EGEE Industry Forum are companies of all sizes that have their core business, or a part of their business, within Europe.
- The Industry Forum will be managed by a steering group consisting of EGEE project partners and representatives from business.
23 NA4 Testing Group
Three types of tests will be developed, based on user requirements and on the experience gathered by ongoing activities such as the LHC DCs and ARDA prototyping (see the sketch after this list):
- Tests of service availability: this set of tests will check the availability of the EGEE services. All the services provided by the grid should be tested: job submission and management, file management, the information service, ...
- Tests of functionality: to verify that the required functionalities are available, usable and complete; for example, file creation, moving and deletion, information publication, error recovery.
- Tests to measure performance: their goal is to characterise the testbed from the end-user/application perspective. Some will be time measurements (time to submit X jobs, time to replicate Y files, ...), others will address scalability (how many jobs can be accepted by service Z, file size limits, ...), while others will be more abstract (information availability, access to error messages, ...).
This work should be done in close collaboration with ARDA, JRA1 and SA1.
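As a flavour of what a service-availability test might look like, here is a minimal Python sketch that times a single probe command and reports success or failure. The probe command (edg-job-submit with a trivial JDL) and the timeout are illustrative assumptions; any probe could be substituted.

```python
# Illustrative sketch of a service-availability / timing test of the kind
# described above: run one probe command, record whether it succeeds and
# how long it takes. The probe command here is an assumption (LCG-2 era
# edg-job-submit with a trivial "hello world" JDL).
import subprocess
import time

def timed_probe(cmd):
    """Run one probe command; return (succeeded, elapsed_seconds)."""
    start = time.monotonic()
    try:
        result = subprocess.run(cmd, capture_output=True, timeout=300)
        ok = result.returncode == 0
    except (subprocess.TimeoutExpired, FileNotFoundError):
        ok = False
    return ok, time.monotonic() - start

if __name__ == "__main__":
    ok, elapsed = timed_probe(["edg-job-submit", "hello_world.jdl"])
    print(f"job submission {'OK' if ok else 'FAILED'} in {elapsed:.1f}s")
```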
24 Milestones for NA4 applications
- MNA4.1 (M6): first applications migrated to the EGEE infrastructure. HEP data challenges for the 4 LHC experiments and for D0; in biomedicine, the GATE simulation in nuclear medicine plus others; plus the first generic applications.
- MNA4.2 (M12): first external review of Applications Identification and Support, with feedback.
- MNA4.3 (M24): second external review of Applications Identification and Support, with feedback.
25 NA4 relations with other EGEE activities and other bodies (1)
- SA1 (grid operations): how to get new VOs from different domains onto LCG; how to integrate new resources (sites) from different application areas into LCG; rationalisation of test procedures; working with national agencies (e.g. GridPP application monitoring).
- NA3 (training): estimating requirements for courses; design and implementation of courses.
- JRA1 (middleware): all applications input their requirements and monitor their satisfaction, with feedback to middleware (the process goes through the PTF, the Project Technical Forum).
- JRA2 (quality assurance): NA4 has a representative on this group to define the process for monitoring the quality of EGEE services.
26 NA4 relations with other EGEE activities and other bodies (2)
- JRA3 (security): security of medical (and other application) data; security for sites.
- SA2, JRA4 (networking): global HEP requirements come through LCG; Biomed and the other applications must similarly state their global needs; NA4 will supply individual application use cases, especially where problems have been encountered.
- LCG: NA4/HEP is represented on the LCG GAG (Grid Applications Group). This is HEP's source of requirements and of feedback to middleware on customer satisfaction. Some GAG people are on the PTF.
27 Conclusions
- NA4 is up and running now.
- HEP is using LCG-2 for data challenges; ARDA is well under way and waiting for the first new middleware prototype.
- Biomedicine has applications ready to go onto LCG-2 and the pre-production services.
- The Generic group is very active with GILDA and has excellent relations with NA3.
- The Testing group has an active dialogue with JRA1 and ARDA on rationalising the testing effort.
- The Industry Forum has developed links with several companies (see the EGEE Cork presentations).
- NA4 open meeting in July at Catania, with emphasis on interactive dialogue (with middleware, operations, security, networking).
- NA4 web site.