LHC GRID computing in Poland
1 POLAND LHC GRID computing in Poland. Michał Turała, IFJ PAN / ACK Cyfronet AGH, Kraków (Polish Particle Physics Symposium, Warszawa; ICFA DDW07, Mexico City)
2 Outline: Computing needs of LHC experiments; WLCG, the Worldwide LHC Computing Grid; Polish participation in WLCG; long-term perspectives; conclusions.
3 POLAND Computing needs of LHC experiments
4 Challenges of LHC experiments: high radiation (radiation resistance); complex events (high granularity of detectors, long processing time); very rare events (preselection); huge volume of data (registration, storage); world-wide access to the data (networking); large, distributed collaborations (coordination); long-lasting experiments (documentation).
5 Data preselection in real time: many different physics processes; several levels of filtering; high efficiency for events of interest; total reduction factor of about 10^7. LHC data preselection chain: 40 MHz (equivalent to 1000 TB/s) -> Level 1, special hardware -> 75 kHz (75 GB/s, fully digitised) -> Level 2, embedded processors/farm -> 5 kHz (5 GB/s) -> Level 3, farm of commodity CPUs -> 100 Hz (100 MB/s) -> data recording and offline analysis.
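The quoted reduction factor of about 10^7 can be checked directly from the bandwidth figures above:

\[ \frac{1000\ \mathrm{TB/s}}{100\ \mathrm{MB/s}} \;=\; \frac{10^{9}\ \mathrm{MB/s}}{10^{2}\ \mathrm{MB/s}} \;=\; 10^{7} \]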
6 Data rate of LHC experiments. [Table, per experiment (ALICE heavy-ion, ALICE pp, ATLAS, CMS, LHCb): trigger rate [Hz], RAW event size [MB], ESD/reco size [MB], AOD size [kB], Monte Carlo size [MB/evt] and Monte Carlo as % of real data; the numeric entries were not preserved in transcription.] For LHC computing, 100M SPECint2000, i.e. about 100K 3 GHz Pentium 4 processors, are needed! For data storage, 20 petabytes of disks/tapes per year are needed! First pp data from 2009 on: ~10^9 events per experiment, with roughly 10^7 seconds/year of pp running and 10^6 seconds/year of heavy-ion running.
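For a sense of where petabyte-scale annual volumes come from, a minimal sketch in Python; the 200 Hz rate and 1.5 MB event size below are illustrative assumptions, not the experiments' official figures (which were lost from the table above):

# Back-of-the-envelope estimate of annual RAW data volume.
# The trigger rate and event size are illustrative assumptions only.
def annual_volume_pb(rate_hz: float, event_size_mb: float,
                     live_seconds: float = 1e7) -> float:
    """Annual data volume in petabytes for a given trigger rate and event size."""
    return rate_hz * event_size_mb * live_seconds / 1e9  # MB -> PB

# Example: a 200 Hz trigger writing 1.5 MB raw events over ~1e7 live seconds:
print(f"{annual_volume_pb(200, 1.5):.1f} PB/year")  # -> 3.0 PB/year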
7 HEP networking needs. The ICFA Network Task Force (1998) estimated the required network bandwidth (Mbps) by category: bandwidth utilized per physicist (and peak bandwidth used), roughly 0.5-2; bandwidth utilized by a university group, roughly 2-10; bandwidth to a home laboratory or regional center, roughly 10-100; bandwidth to a central laboratory housing major experiments, and bandwidth on a transoceanic link (figures not preserved in transcription).
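To put such bandwidths in perspective, a worked example (illustrative only, not from the slide): shipping a 1 TB dataset to a home laboratory over a fully dedicated 100 Mbps link takes

\[ t = \frac{8\times 10^{12}\ \mathrm{bits}}{10^{8}\ \mathrm{bits/s}} = 8\times 10^{4}\ \mathrm{s} \approx 22\ \mathrm{hours}. \]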
8 LHC computing model
9 POLAND Worldwide LHC Computing Grid
10 LHC Grid computing model. The LCG project was launched in 2002, with the goal of demonstrating the viability of such a concept. The Memorandum of Understanding to build the Worldwide LHC Computing Grid has been prepared and signed by almost all countries/agencies participating in the LHC program. [Figure: the tiered LCG topology, from the Tier0 computing facility at CERN through Tier1 centres (e.g. Brookhaven and FermiLab in the USA, and laboratories in France, the UK, Italy, the Netherlands and Germany) to Tier2 and Tier3 centres at universities and physics departments, down to individual desktops.]
11 LCG required networking. National Research Networks (NRENs) at the Tier-1s: ASnet, LHCnet/ESnet, GARR, LHCnet/ESnet, RENATER, DFN, SURFnet6, NORDUnet, RedIRIS, UKERNA, CANARIE. The Tier-2s, e.g. the Polish Tier2, connect via GÉANT and their NRENs.
12 LHC computing needs. Summary of computing resource requirements for all experiments (from the LCG TDR, June 2005), split across CERN, all Tier-1s and all Tier-2s, in CPU (MSPECint2000), disk (petabytes) and tape (petabytes). Shares: CPU: CERN 18%, all Tier-1s 39%, all Tier-2s 43%; disk: CERN 12%, all Tier-1s 55%, all Tier-2s 33%; tape: CERN 34%, all Tier-1s 66%.
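A minimal sketch of how such shares translate into absolute capacities; the TDR totals did not survive transcription, so the TOTALS values below are placeholders, not the real requirements:

# Tier shares as read from the slide; totals are placeholder values only.
SHARES = {
    "CPU (MSI2000)": {"CERN": 0.18, "Tier-1s": 0.39, "Tier-2s": 0.43},
    "Disk (PB)":     {"CERN": 0.12, "Tier-1s": 0.55, "Tier-2s": 0.33},
    "Tape (PB)":     {"CERN": 0.34, "Tier-1s": 0.66},
}
TOTALS = {"CPU (MSI2000)": 100.0, "Disk (PB)": 60.0, "Tape (PB)": 50.0}

for resource, shares in SHARES.items():
    split = ", ".join(f"{site} {frac * TOTALS[resource]:.1f}"
                      for site, frac in shares.items())
    print(f"{resource}, total {TOTALS[resource]:g}: {split}")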
13 LCG Grid activity: about 230k jobs/day. These workloads (reported across all WLCG centres) are at the level anticipated for 2008 data taking (from I. Bird, April 2008).
14 LCG CPU usage: ~90% of CPU usage is external to CERN (and not all external resources are taken into account) (from I. Bird, April 2008).
15 LCG Tier2 sites: recent usage (from I. Bird, April 2008).
16 LCG CERN data export target (from I. Bird, April 2008).
17 LCG reliability: in February 2008, all Tier-1 and 100 Tier-2 sites reported reliabilities (from I. Bird, April 2008).
18 LCG ramp-up needed before LHC start: capacities must still grow by factors of roughly 2.3x to 3.7x (the figure's individual values: 3x, 2.9x, 3.7x, 2.3x, 3.7x) (from L. Robertson, Oct.).
19 POLAND Poland in WLCG
20 [Figure: map of the PIONIER network, 4Q2006, linking MANs in Koszalin, Gdańsk, Olsztyn, Szczecin, Poznań, Bydgoszcz, Toruń, Białystok, Warszawa, Łódź, Wrocław, Zielona Góra, Opole, Częstochowa, Radom, Kielce, Puławy, Lublin, Katowice, Kraków, Rzeszów and Bielsko-Biała with 2x10 Gb/s (2 λ) links and cross-border dark fibre (CBDF) at 10 Gb/s (1-2 λ); external connectivity: GÉANT2/Internet 7.5 Gb/s, general Internet 2.5 Gb/s, DFN 2x10 Gb/s, CESNET and SANET 10 Gb/s, BASNET 34 Mb/s.]
21 Polish participation in networking and grid projects: PIONIER, CrossGrid and EGEE were essential for the development of the Polish WLCG infrastructure.
22 ACK Cyfronet AGH in LCG-1, Sept. 2003: sites taking part in the initial LCG service (red dots on the map, including Kraków, Poland and Karlsruhe, Germany). Small test clusters at 14 institutions plus a grid middleware package (mainly parts of EDG and VDT) formed a global Grid testbed. "This was the very first really running global computing and data Grid, which covered participants on three continents" (from K-P. Mickel at CGW03).
23 Networking and computational infrastructure of the Polish Tier2 for WLCG
24 Polish infrastructure for WLCG: ACK Cyfronet AGH, Tier2 for LCG, 450 ksi2k, 50 TB; PCSS Poznań, Tier2 for LCG, ~400 ksi2k, 16 TB; ICM Warszawa, Tier2 for LCG, 350 ksi2k, 40 TB; a Tier3 at IFJ PAN Cracow, and a Tier3 at IPJ Warsaw soon.
25 Resources of EGEE for WLCG
26 Data transfer FZK-Cyfronet via FTS at FZK and dCache, 20 June 2007. Channel settings: Star-FZK, 10 parallel streams, 10 parallel files; Star-CYFRONET, 10 parallel streams, 20 parallel files. In the Kraków-to-FZK and FZK-to-Kraków tests only 4-5 files were active at a time.
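Why the number of active files matters: with file-level parallelism, the aggregate FTS rate is roughly the number of concurrently active transfers times the per-file rate. A toy illustration (the 10 MB/s per-file figure is an assumption, not a measurement from the slide):

# Toy model: aggregate throughput vs. number of concurrently active files.
def aggregate_mb_s(active_files: int, per_file_mb_s: float) -> float:
    """Aggregate rate when each active transfer sustains per_file_mb_s."""
    return active_files * per_file_mb_s

# 20 files configured on the channel, but only 5 actually active:
print(aggregate_mb_s(5, 10.0), "MB/s achieved vs.",
      aggregate_mb_s(20, 10.0), "MB/s potential")  # 50.0 vs. 200.0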
27 LCG Polish CPU usage: PCSS Poznan, ICM Warsaw, ACK Cyfronet Krakow (RTM snapshot of 15 Apr.).
28 Poland in WLCG. In 2006 Poland signed the Memorandum of Understanding (MoU) on participation in the WLCG project as a distributed Tier2, comprising the computing centers of Cracow (ACK Cyfronet AGH), Poznan (PSNC) and Warsaw (ICM). According to this agreement, in 2007 Poland should provide the LHC experiments with about 650 processors and about 60 TB of disk storage; during 2008 these numbers should roughly double. [Table from the WLCG accounting summary for January 2008: reliability and availability for Oct. 07, Nov. 07 and Dec. 07; values not preserved in transcription.]
29 Future development. [Figure: planned Tier1 and Tier2 capacities and Poland's share; for ATLAS, the Disk/CPU ratios shown are 44% and 27% (assignment not preserved in transcription).]
30 POLTIER, a national network on LHC computing. Created in 2006 by 8 research institutions: IFJ PAN, PK, UST AGH and UJ in Krakow; PSNC Poznan; IFD UW, IPJ and PW in Warsaw, with IFJ PAN as coordinator. The main goal of the network is to coordinate the Polish effort towards LHC computing, including coordination with the relevant international bodies (CERN, LCG, GDB, FZK Karlsruhe, others). Several local meetings took place: Warsaw (2006), Poznan (2006), Krakow (2007), Warsaw (2007); participation in international meetings is limited due to lack of funding. Financial support is badly needed: the first application, in 2006, was rejected on formal grounds; the 2007 request awaits a decision.
31 POLAND Polish National Grid Initiative
32 Polish participation in networking and grid projects: GEANT and GEANT2 (European e-infrastructure); EGI and the National Grid Initiative PL-Grid.
33 EGEE CEGC ROC, coordinated by Poland. Very recently Belarus became a new member of the CEGC ROC. Poland is also a member of the Baltic Grid projects, which include Estonia, Lithuania and Latvia.
34 Resources of the EGEE CEGC ROC: these resources are mainly used by physics, bioinformatics and chemistry.
35 Resources of the EGEE CEGC ROC: a large fraction of these resources is provided by the Polish computing centers.
36 Users of the Polish EGEE Grid: physics, biomedical applications, chemistry.
37 PL-Grid initial phase: creation of the Polish Grid (PL-Grid) Consortium, with the agreement signed in January 2007, and preparation of the PL-Grid Project. The Consortium is made up of the five largest Polish supercomputing and networking centers (founders): Academic Computer Center Cyfronet AGH (ACK CYFRONET AGH), coordinator; Poznań Supercomputing and Networking Center (PCSS); Wrocław Centre for Networking and Supercomputing (WCSS); Academic Computer Center in Gdańsk (TASK); Interdisciplinary Center for Mathematical and Computational Modelling, Warsaw University (ICM).
38 PL-Grid Project aims. The main aim of the PL-Grid Project is to create and develop a stable Polish Grid infrastructure, fully compatible and interoperable with European and worldwide Grids. The Project should provide scientific communities in Poland with Grid services, enabling realization of the e-science model in various scientific fields. As in solutions adopted in other countries, the Polish Grid should have a common base infrastructure; specialized domain Grid systems, including services and tools focused on specific types of applications, should be built upon it.
39 PL-Grid generic architecture. These domain Grid systems can be further developed and maintained in the framework of separate projects; such an approach should enable efficient use of the available financial resources. [Figure: layered architecture, top to bottom: applications on advanced service platforms; domain Grids; the Grid infrastructure (Grid services); PL-Grid clusters, high-performance computers and data repositories; the national computer network PIONIER.]
40 PL-Grid workpackages: project management (including PL-Grid architecture and dissemination), coordinated by ACK CYFRONET AGH (Krakow); planning and development of infrastructure, TASK (Gdansk); operations center, ACK CYFRONET AGH; development and installation of middleware, PCSS (Poznan); support for domain Grids, ICM (Warsaw); security center, WCSS (Wrocław).
41 PL-Grid Project. The Consortium has prepared a PL-Grid Project proposal, to be financed by national funds over a three-year period; it needs an update to satisfy the requirements of the recent call. The Consortium plans to continue the project(s) in the following years; however, the next phase will depend on the results of the first one, as well as on user needs and international developments in this field. The PL-Grid Project is consistent with long-term plans for the development of the Polish IT infrastructure. The PL-Grid Initiative aligns well with the European Grid Initiative, towards the creation in Europe of a sustainable production Grid for e-science.
42 POLAND Conclusions
43 Conclusions. Poland is well placed in the framework of European networking and grid computing, which is used by local research groups (in particular by physicists, bioinformaticians and chemists). Polish physicists and computer scientists created a distributed LCG Tier2 whose availability and efficiency are high; the delivered computing power is in the range of 2-3% of the total, while the disk storage needs an increase. There is a systematic effort in Poland to develop a national grid infrastructure for science, in particular PL-Grid, so as to participate efficiently in the forthcoming EU and other projects. Networking and grid computing in the Eastern European countries (in particular the Baltic states) is improving with the help of the EU and of neighbouring countries, including Poland; we would be ready to play the role of a regional EGI coordinator.
44 POLAND Acknowledgements
45 Acknowledgements. The development of Polish Grid computing, including the LHC Computing Grid infrastructure (the Polish Tier2), was possible due to: contributions from a number of physicists, computer scientists and engineers of Krakow, Poznan and Warsaw, including M. Bluj, T. Bold, M. Bubak, M. Dwuznik, R. Gokieli, J. Kitowski, A. Kusznir, R. Lichwala, P. Malecki, N. Meyer, J. Nabrzyski, K. Nawrocki, P. Nyczyk, A. Olszewski, B. Palach, H. Palka, A. Padee, M. Plociennik, M. Radecki, L. Skital, M. Sterzel, T. Szepieniec, W. Wislicki, M. Witek, P. Wolniewicz, and others; continuous support from the management of ACK Cyfronet AGH Krakow, ICM UW Warsaw and PSNC Poznan; continuous collaboration and support from the CERN side, particularly from the IT Division and the DataGrid, EGEE and LCG projects; and financial support, although limited, from the Polish Ministry of Science and Computing Infrastructure / Science and Higher Education.
46 Acknowledgements. The material in this presentation comes largely from several talks and reports provided by J. Kitowski (PL-Grid), P. Lason and M. Radecki (EGEE), R. Lichwala (PIONIER), R. Gokieli and A. Olszewski (Polish Tier2), and I. Bird and L. Robertson (LCG).
47 POLAND Thank you for your attention