From raw data to PBytes on disk: The world-wide LHC Computing Grid
1 The world-wide LHC Computing Grid. HAP Workshop "Dark Universe", Bad Liebenzell, Nov. 22nd. KIT, University of the State of Baden-Wuerttemberg and National Research Center of the Helmholtz Association.
2 Large Hadron Collider and Experiments. The LHC is addressing fundamental questions of contemporary particle physics and cosmology: - Origin of mass: the Higgs mechanism - What is dark matter? - Matter-antimatter asymmetry? - Matter in the early universe? - Hints for quantum gravity?? - Dark energy??? 2
3 Particle physics is international teamwork: ~300 institutes from Europe, ~300 institutes elsewhere; ~7000 users / ~4000 users.
4 Particle Physics international 4
5 LHC parameters: Luminosity ~10^34 cm^-2 s^-1 (design, already exceeded by now); max. 2835 x 2835 proton-proton bunches; 10^11 protons/bunch; proton energy 7 TeV; collision rate 40 MHz; up to 10^9 pp-interactions/sec. 4 detectors: ATLAS (multi-purpose, pp), CMS (multi-purpose, pp), LHCb (asymmetric, for b physics), ALICE (heavy-ion physics, Pb-Pb and p-Pb). 5
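A quick back-of-the-envelope check of these parameters (a minimal sketch; the inelastic pp cross section of roughly 70 mb is an assumed ballpark value, not quoted on the slide):

```python
# Back-of-the-envelope rates from the design parameters quoted above.
LUMINOSITY = 1e34      # cm^-2 s^-1, design luminosity
SIGMA_INEL = 70e-27    # cm^2 (~70 mb), assumed ballpark inelastic pp cross section
BUNCH_RATE = 40e6      # Hz, bunch-crossing (collision) rate

interactions_per_sec = LUMINOSITY * SIGMA_INEL            # ~7e8, i.e. "up to 10^9 pp-interactions/sec"
pileup_per_crossing = interactions_per_sec / BUNCH_RATE   # ~18 overlapping pp interactions per crossing

print(f"pp interactions per second:      {interactions_per_sec:.1e}")
print(f"mean pile-up per bunch crossing: {pileup_per_crossing:.1f}")
```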
6 Data sources: ATLAS, CMS, ALICE, LHCb. Built by institutes all over the world, ~10'000 physicists from ~70 countries. Each detector has more than 100 million sensors; 40 million pp collisions per second; detectors specialized on different questions. 6
7 The Challenge: rates of interesting physics are ~10^10 times smaller than the inelastic pp cross section. 7
8 Data sources, example CMS: ~100 million detector cells, LHC collision rate of 40 MHz, bits per cell: ~1000 TByte/s of raw data. Trigger cascade: 40 MHz (Level-1 electronics) 1000 TB/s -> 100 kHz (Level-2 hardware equivalent) 100 GB/s -> online farm (10 kHz, digitized) 10 GB/sec -> ~300 Hz (500 MB/sec) to the world-wide physics community. Zero-suppression and trigger reduce this to only some 100 MByte/s. Distributed computing is ideal to do this job!
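The reduction chain can be checked with simple arithmetic, using the rates quoted on this and the next slide (the bytes-per-cell figure is an assumption chosen to reproduce the quoted raw rate):

```python
# Rough data-volume arithmetic for the readout chain described above.
N_CELLS = 100e6          # ~100 million detector cells
CROSSING_RATE = 40e6     # Hz
BYTES_PER_CELL = 0.25    # assumed average (a few bits per cell), chosen to match "~1000 TByte/s"

raw_rate = N_CELLS * CROSSING_RATE * BYTES_PER_CELL   # bytes/s before any reduction

EVENT_SIZE = 1e6         # ~1 Megabyte per built event (see the next slide)
OUTPUT_RATE = 300        # ~300 Hz written out for offline analysis

stored_rate = OUTPUT_RATE * EVENT_SIZE                # a few hundred MB/s to storage
print(f"raw data rate:            {raw_rate/1e12:.0f} TB/s")
print(f"stored data rate:         {stored_rate/1e6:.0f} MB/s")
print(f"overall reduction factor: {raw_rate/stored_rate:.1e}")
```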
9 CMS Trigger & DAQ. Detectors: 40 MHz collision rate (charge, time, pattern), 16 million channels, 3 Gigacell buffers. Level-1 trigger: 100 kHz (energy, tracks), 1 Megabyte event data, 1 Terabit/s, 200 Gigabyte buffers (50000 data channels), 500 readout memories. EVENT BUILDER: a large switching network ( ports) with a total throughput of approximately 500 Gbit/s forms the interconnection between the sources (Readout Dual Port Memory) and the destinations (Switch to Farm Interface); the Event Manager collects the status and requests of event filters and distributes event-building commands (read/clear) to the RDPMs. EVENT FILTER (5 TeraIPS): a set of high-performance commercial processors organized into many farms convenient for on-line and off-line applications; the farm architecture is such that a single CPU processes one event. Gigabit/s service LAN, computing services, Petabyte archive.
10 The output: reconstructed events 10
11 Computing Models: a hierarchical structure, example CMS: 7 Tier-1 centres, ~50 Tier-2 centres (CMS Computing TDR, 2005). The LHC experiments typically share big Tier-1s, which take responsibility for experiment-specific services; have a large number of Tier-2s, usually supporting only one experiment; and have an even larger number of Tier-3s without any obligations towards WLCG. 11
12 The Grid Paradigm... came at the right time to provide a model for distributed computing in HEP (C. Kesselman, I. Foster, 1998): "Coordinates resources that are not subject to central control... using standard, open, general-purpose protocols and interfaces... to deliver nontrivial qualities of service" (I. Foster).
13 Why the Grid is well suited for HEP. Key characteristics of experimental HEP codes: modest memory requirement (~2 GB) and modest floating point, so they perform well on PCs; independent events, giving easy parallelism; large data collections (TBytes) shared by very large collaborations. 13
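Because events are statistically independent, they can be processed in parallel without any communication between workers; a minimal sketch of this trivial parallelism (the reconstruct function is a stand-in for real experiment code):

```python
# Independent events make HEP processing trivially parallel:
# each worker reconstructs its own events, no inter-process communication needed.
from multiprocessing import Pool

def reconstruct(event):
    # Stand-in for a real reconstruction algorithm: here just sum the hit energies.
    return sum(event["hits"])

if __name__ == "__main__":
    events = [{"id": i, "hits": [0.1 * i, 1.0, 2.5]} for i in range(1000)]
    with Pool(processes=4) as pool:
        results = pool.map(reconstruct, events)   # events distributed across cores
    print(f"reconstructed {len(results)} events")
```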
14 Grid for LHC. Given the international and collaborative nature of HEP, computing must be distributed, to harvest intellectual contributions from all partners (also a funding issue). Early studies in 1999 (MONARC study group) suggested a hierarchical approach, following the typical data-reduction schemes usually adopted in data analysis in high-energy physics. The Grid paradigm came at the right time and was adopted by LHC physicists as the baseline for distributed computing. Major contributions by physicists to developments in Grid computing; other HEP communities also benefited and contributed. 14
15 The World-wide LHC Computing Grid in action. Source: . A truly international, world-spanning Grid for LHC data processing, simulation and analysis. 15
16 The World-wide LHC Computing Grid in action, 365 d x 24 h. Source: . A truly international, world-spanning Grid for LHC data processing, simulation and analysis. 16
17 Structure of the LHC Grid: a Grid with hierarchies and different tasks at different levels. In addition, it is a Grid of Grids, with interoperability between different middlewares: gLite middleware in most of Europe; Open Science Grid in the USA; NorduGrid in Northern Europe; AliEn by the ALICE collaboration. 17
18 WLCG: a Grid with hierarchical structure. Tier-0 (the accelerator centre): data acquisition & initial processing, long-term data curation, distribution of data to T1/T2; online to the data-acquisition process, high availability. 11 Tier-1 centres: Canada TRIUMF (Vancouver), Spain PIC (Barcelona), Taiwan Academia Sinica (Taipei), France IN2P3 (Lyon), UK CLRC (Oxford), Germany KIT/GridKa (Karlsruhe), US FermiLab (Illinois) and Brookhaven (NY), Italy CNAF (Bologna), Netherlands NIKHEF/SARA (Amsterdam), Nordic countries (distributed Tier-1); roles: managed mass storage, grid-enabled data service, data-intensive analysis, national/regional support. Tier-2: 150 centres in 60 federations in 35 countries; end-user (physicist, research group) analysis, in collaboration with T3 (= institute resources), where the discoveries are made; Monte Carlo simulation; several 100 grid-enabled PCs. Tier-3: institute resources, without formal obligations towards WLCG.
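The tier roles above can be summarised in a small data structure; a purely illustrative sketch (the role strings paraphrase the slide):

```python
# Illustrative summary of the WLCG tier roles described on this slide.
WLCG_TIERS = {
    "Tier-0": ["data acquisition & initial processing", "long-term data curation",
               "distribution of data to T1/T2"],
    "Tier-1": ["managed mass storage (high availability)", "grid-enabled data service",
               "data-intensive analysis", "national/regional support"],
    "Tier-2": ["end-user analysis", "Monte Carlo simulation"],
    "Tier-3": ["institute-level resources, no formal obligations towards WLCG"],
}

for tier, roles in WLCG_TIERS.items():
    print(f"{tier}: " + "; ".join(roles))
```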
19 Organisation of the World-wide LHC Computing Grid: Grids can't work without an organisational structure representing all parties involved. 19
20 What particle physicists do on the Grid: CPU-intense simulation of particle-physics reactions and detector response; processing (= reconstruction) of large data volumes; I/O-intense filtering and distribution of data; transfer to local clusters and workstations for final physics interpretation. 20
21 Typical workflows on the Grid 21
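A schematic sketch of such a workflow, chaining the steps from the previous slide (all function bodies are placeholders, not experiment code):

```python
# Schematic Grid workflow: simulate -> reconstruct -> filter -> transfer home.
# All steps are placeholders illustrating the data flow, not experiment code.
def simulate(n_events):
    return [{"id": i, "energy": 10.0 + i % 7} for i in range(n_events)]

def reconstruct(events):
    return [dict(e, mass=2.0 * e["energy"]) for e in events]

def select(events, min_mass):
    return [e for e in events if e["mass"] > min_mass]

def transfer_home(events, destination):
    print(f"transferring {len(events)} selected events to {destination}")

raw = simulate(1000)                    # CPU-intense simulation on a Grid site
reco = reconstruct(raw)                 # processing (= reconstruction) of large data volumes
interesting = select(reco, 25.0)        # I/O-intense filtering
transfer_home(interesting, "local-cluster.example.org")   # final interpretation at home
```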
22 How big is WLCG? [Bar charts of CPU (HS06) and disk (TB) by tier: T0, T1, T2.] The largest science Grid in the world. 22
23 How big is WLCG today, and compared with others? Total CPU 2013: 2010 kHS06, approx. equivalent to 200'000 CPU cores. Total disk 2013: 190 PB, the same amount as tape storage (190 PB). Other big players: new HPC cluster in Munich (SuperMUC, #64 on the Top 500 list): 155k cores, 10 PB disk; 1&1 data centre Karlsruhe: 30'000 servers, ~2 PB/month I/O; Amazon cloud: ~500 PB; Facebook: ~30 PB; Google data centres: ~200 MW power (WLCG: ~6 MW).
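A quick check of the benchmark power per core implied by the two quoted totals:

```python
# Implied benchmark power per core from the two totals quoted above.
total_hs06 = 2010e3    # 2010 kHS06 of WLCG CPU in 2013
total_cores = 200e3    # approx. equivalent number of CPU cores

print(f"~{total_hs06 / total_cores:.0f} HS06 per core")   # roughly 10 HS06/core
```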
24 A closer look at the surroundings of GridKa: GridKa supports >20 T2s in 6 countries and provides ~15% of WLCG T1 resources; ALICE T2 sites in Russia; Göttingen. The most complex T2 cloud of any T1 in WLCG. 24
25 The World-wide LHC Computing Grid. After three years of experience with LHC operation: how well did it work? 25
26 After 2 years of experience - did it work? Up to the users to give feedback. D. Charlton, ATLAS, EPS HEP 2011
27 Did it work? (2) G. Tonelli, CMS, EPS HEP
28 Did it work? Obviously it did! - The Grid infrastructure for the LHC performed extremely well - physics results from freshly recorded data - but: the effort for running the computing infrastructure is high! Let's have a look in detail... 28
29 Tier-1 jobs: CMS ~1 million jobs/month, ATLAS ~5 million (shorter) jobs/month on Tier-1s.
30 Tier-2 jobs: ATLAS ~20 million jobs/month, CMS ~7 million jobs/month on Tier-2s.
31 Monitoring, the key to success: WLCG quality monitoring of the data-transfer matrix (by country). 31
32 Data rates: routinely running data transfers across the world at a transfer volume of ~50 TBytes/h. 32
33 Again: does it work? YES! Routinely running ~1 million jobs per day on the Grid; shipping over 1 PB/day of data around; data distribution to T2 works well; very little is known about T3 usage and success rates (responsibility of the institutes); plenty of resources were available at LHC start-up, now reached resource-limited operation. Users have adapted to the Grid world: the Grid is routinely used as a huge batch system, output is transferred home. But... 33
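The quoted transfer figures from this and the previous slide are consistent with each other; a quick conversion:

```python
# Consistency check of the quoted transfer rates: ~50 TB/h vs. "over 1 PB/day".
tb_per_hour = 50.0

gb_per_second = tb_per_hour * 1e3 / 3600   # ~14 GB/s sustained
pb_per_day = tb_per_hour * 24 / 1e3        # ~1.2 PB/day

print(f"{gb_per_second:.1f} GB/s sustained, {pb_per_day:.1f} PB/day")
```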
34 Does it work? Message: it worked better than expected by many, but running such a complex computing infrastructure as the WLCG is tedious (and expensive!). Reliability and cost of operation can be improved by: simplified and more robust middleware; redundancy of services and sites, which requires dynamic placement of data and investment in network bandwidth; automated monitoring and triggering of actions; use of commercially supported approaches to distributed computing: private clouds are particularly important for shared resources at universities, and eventually simple tasks (simulation, statistics calculations) can be off-loaded to commercial clouds. Let's have a look at some future developments... 34
35 LHCONE. The ATLAS and CMS computing models differ slightly; CMS is already more distributed. The aim of the LHCONE project is better trans-regional networking for data analysis, complementary to the LHCOPN network connecting the LHC T1s: flat(ter) hierarchy, any site has access to any other site's data; dynamic data caching, pull data on demand; remote data access, jobs may use data remotely; by interconnecting open exchange points between regional networks. 35
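A minimal sketch of the "pull data on demand" caching idea (purely conceptual; the site names and dataset are hypothetical, not LHCONE software):

```python
# Conceptual "pull data on demand": a site fetches a dataset from a remote site
# only on first access and then serves it from its local cache.
class SiteCache:
    def __init__(self, name):
        self.name = name
        self.local = {}                      # dataset name -> payload

    def get(self, dataset, remote):
        if dataset not in self.local:        # cache miss: pull once over the WAN
            print(f"{self.name}: pulling {dataset} from {remote.name}")
            self.local[dataset] = remote.local[dataset]
        return self.local[dataset]           # later accesses are served locally

t1 = SiteCache("T1-example")
t1.local["run2012B/muons"] = b"..."          # the T1 holds the primary copy
t2 = SiteCache("T2-example")
t2.get("run2012B/muons", t1)                 # first access: remote pull
t2.get("run2012B/muons", t1)                 # second access: local cache hit
```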
36 LHCONE (2): schematic layout of the LHCONE network infrastructure. A dedicated HEP network infrastructure: what is the cost? 36
37 Virtualisation in a nutshell: - Clouds offer Infrastructure as a Service - easy provision of resources on demand, even including (private) cloud resources as a classical batch queue (e.g. the ROCED project developed at EKP, KIT) - independent of local hardware and operating system (Scientific Linux 5 for Grid middleware and experiment software) 37
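The "cloud resources as a classical batch queue" idea reduces to a scaling decision like the following (a hedged sketch of the general mechanism, not the actual ROCED code; all numbers are made up):

```python
# Sketch of the scheduling idea behind cloud-extended batch farms (ROCED-like),
# not the actual ROCED implementation: boot cloud worker nodes when the batch
# queue grows, terminate them when they would sit idle.
def decide_scaling(queued_jobs, running_vms, jobs_per_vm=4, max_vms=100):
    needed = -(-queued_jobs // jobs_per_vm)   # ceiling division: VMs needed for the queue
    target = min(needed, max_vms)             # respect the cloud quota
    return target - running_vms               # >0: boot that many VMs, <0: terminate

for queued, running in [(37, 2), (0, 5), (400, 50)]:
    delta = decide_scaling(queued, running)
    if delta > 0:
        print(f"queue={queued:3d}, VMs={running:3d} -> boot {delta} VM(s)")
    elif delta < 0:
        print(f"queue={queued:3d}, VMs={running:3d} -> terminate {-delta} VM(s)")
    else:
        print(f"queue={queued:3d}, VMs={running:3d} -> no change")
```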
38 CernVM & CernVM-FS. CernVM is a virtual machine ("Virtual Software Appliance") based on Scientific Linux with the CERN software environment; runs on ... CernVM-FS is a client-server file system based on HTTP, implemented as a user-space file system optimized for read-only access to software repositories, with a performant caching mechanism; it allows a CernVM instance to efficiently access software installed remotely. 38
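The core idea of CernVM-FS, read-only software delivery over HTTP with aggressive local caching, can be illustrated in a few lines (a conceptual sketch only, not the CernVM-FS client; the repository URL is a placeholder):

```python
# Conceptual illustration of the CernVM-FS idea: read-only software files are
# fetched over plain HTTP from a repository server and cached on local disk,
# so repeated access is served locally. This is NOT the CernVM-FS client.
import os
import urllib.request

CACHE_DIR = "/tmp/cvmfs-demo-cache"
REPO_URL = "http://software-repo.example.org"   # placeholder repository URL

def read_file(path):
    """Return the contents of a repository file, downloading it at most once."""
    cached = os.path.join(CACHE_DIR, path.lstrip("/"))
    if not os.path.exists(cached):                       # cache miss: one HTTP GET
        os.makedirs(os.path.dirname(cached), exist_ok=True)
        urllib.request.urlretrieve(f"{REPO_URL}/{path.lstrip('/')}", cached)
    with open(cached, "rb") as f:                        # cache hits are purely local
        return f.read()

# e.g. read_file("experiment/setup.sh") would download once, then read from the cache
```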
39 Most important challenge: hardware evolves. The number of transistors per chip keeps growing exponentially, but clock rates saturate; instead: more CPU cores on a chip! This affects the way we (have to) write software.
40 Software challenges are not an academic problem, they have already hit us: CPU time as a function of simultaneous collisions in an event. Such a high pile-up environment will become normal with rising LHC luminosity. Event recorded by CMS with 78 simultaneous proton-proton collisions in one bunch crossing. 40
41 Software challenges: need (to learn) new ways of writing code. Exploit vector units on CPU cores: data-oriented programming (to replace object-oriented programming in time-critical algorithms); exploit auto-vectorisation in the new gcc to use the Single Instruction Multiple Data (MMX, SSE(2), AVX, ...) features of modern CPUs. Memory per core remains roughly constant; with multi-threading and multi-core CPUs, one job per core, i.e. trivial parallelism, does not work any more; this requires clever workflows and thread control. Use of Graphics Processing Units (GPUs) for time-critical parts; development environments get more user-friendly, we will have to learn this! Hint: there will be a course on Parallel Software Development at the 1st KSETA topical days in February 2013 by Benedikt Hegner (CERN) et al. (from whom I've stolen some of the material just shown).
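A small illustration of the data-oriented versus object-oriented point (a sketch using NumPy to stand in for SIMD-friendly, array-at-a-time processing):

```python
# Data-oriented layout: one contiguous array per quantity lets vectorised kernels
# (and the CPU's SIMD units) process many values per instruction, instead of
# looping over a collection of per-event objects in Python.
import numpy as np

n = 100_000
px = np.random.normal(size=n)
py = np.random.normal(size=n)

# object-oriented style: one interpreted iteration per event, not vectorisable
pt_loop = [(x**2 + y**2) ** 0.5 for x, y in zip(px, py)]

# data-oriented style: a single vectorised expression over whole arrays
pt_vec = np.hypot(px, py)

assert np.allclose(pt_loop, pt_vec)   # same result, computed array-at-a-time
```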
42 Outlook: the existing LHC Grid provides excellent performance to particle physics. The situation was rather comfortable in the past: sit and wait for better hardware; this will change: we have to adapt to new hardware environments??? Will PhD students be able to master parallel programming & physics & detector service tasks??? Need new, parallel-capable frameworks. Resource demands of the experiments will rise: detector upgrades with more readout channels, higher trigger rates, higher luminosity and more crowded events; we will count on an evolutionary approach, as we did so far. New, open and standard ways of distributing computing have emerged: various variants of cloud computing, to be integrated into the computing models. Network bandwidth will continue rising, which decouples storage from CPU and enables clever, trans-regional data management. 42