Computing at the HL-LHC
Predrag Buncic, on behalf of the Trigger/DAQ/Offline/Computing Preparatory Group
ALICE: Pierre Vande Vyvre, Thorsten Kollegger, Predrag Buncic; ATLAS: David Rousseau, Benedetto Gorini, Nikos Konstantinidis; CMS: Wesley Smith, Christoph Schwick, Ian Fisk, Peter Elmer; LHCb: Renaud Legac, Niko Neufeld
ECFA Workshop, Aix-les-Bains, October 3, 2013
Computing @ HL-LHC
CPU needs (per event) will grow with track multiplicity (pileup); storage needs are proportional to the accumulated luminosity. On the upgrade timeline, ALICE and LHCb face the step change at Run 3, ATLAS and CMS at Run 4.
Outline: 1. Resource estimates. 2. The probable evolution of distributed computing technologies.
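A minimal sketch of the two scaling rules quoted above, with placeholder reference values (the constants and the assumed linear pileup dependence are purely illustrative, not experiment estimates):

```python
def cpu_time_per_event(pileup, t_ref=1.0, pileup_ref=20.0):
    # CPU cost per event grows with track multiplicity, i.e. with pileup;
    # a simple linear dependence is assumed here for illustration only.
    return t_ref * (pileup / pileup_ref)

def raw_storage_needed(integrated_lumi, storage_ref=10.0, lumi_ref=1.0):
    # Storage grows in proportion to the accumulated (integrated) luminosity:
    # twice the luminosity means twice the recorded events, hence twice the RAW data.
    return storage_ref * (integrated_lumi / lumi_ref)

print(cpu_time_per_event(140))   # HL-LHC-like pileup: ~7x the reference per-event cost
print(raw_storage_needed(3.0))   # 3x the reference luminosity: 3x the storage
```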
LHCb & ALICE @ Run 3
LHCb: 40 MHz detector readout, reconstruction + compression in the trigger farm down to 20 kHz at 0.1 MB/event, i.e. a peak output to storage of 2 GB/s.
ALICE: 50 kHz readout, online reconstruction + compression, 50 kHz at 1.5 MB/event written out, i.e. a peak output to storage of 75 GB/s.
ATLAS & CMS @ Run 4
ATLAS: Level-1 trigger, HLT, 5-10 kHz at 2 MB/event to storage, i.e. a peak output of 10-20 GB/s.
CMS: Level-1 trigger, HLT, 10 kHz at 4 MB/event to storage, i.e. a peak output of 40 GB/s.
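The peak output figures on these two slides follow directly from trigger rate times event size; a quick cross-check using the numbers quoted above:

```python
def peak_output_gb_per_s(rate_khz, event_size_mb):
    # kHz x MB/event = 1000 events/s x MB/event = GB/s
    return rate_khz * event_size_mb

print(peak_output_gb_per_s(20, 0.1))   # LHCb  @ Run 3:  2 GB/s
print(peak_output_gb_per_s(50, 1.5))   # ALICE @ Run 3: 75 GB/s
print(peak_output_gb_per_s(10, 2.0))   # ATLAS @ Run 4: 20 GB/s (5-10 kHz gives 10-20 GB/s)
print(peak_output_gb_per_s(10, 4.0))   # CMS   @ Run 4: 40 GB/s
```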
Big Data
[Chart: data access rate vs. data size; beyond the region where standard data processing is applicable lies "Big Data".]
Big Data: data that exceeds the boundaries and sizes of normal processing capabilities, forcing you to take a non-traditional approach.
How do we score today?
[Bar chart, scale 0-200 PB: Twitter, stock database, Library of Congress digital collection, LHC RAW + derived data @ Run 1, Climatic Data Center database, LHC raw data per year, YouTube videos per year, digital health records, Google index, Facebook new content per year.]
Data: Outlook for HL-LHC
[Chart: projected new RAW data per year in PB (vertical scale up to 450 PB), by experiment (CMS, ATLAS, ALICE, LHCb), for Run 1 through Run 4.]
Very rough estimate of new RAW data per year of running, using a simple extrapolation of the current data volume scaled by the output rates. Still to be added: derived data (ESD, AOD), simulation, user data.
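A sketch of the "simple extrapolation" mentioned in the caption: scale the RAW volume recorded per year today by the ratio of the future output rate to the current one. All inputs below are placeholders, not the numbers behind the plot:

```python
def projected_raw_pb_per_year(current_raw_pb, current_rate_gb_s, future_rate_gb_s):
    # RAW data per year scales with the peak output rate to storage,
    # assuming a similar amount of running time per year.
    return current_raw_pb * future_rate_gb_s / current_rate_gb_s

# e.g. an experiment writing 10 PB/year today at 1 GB/s peak output and planning
# 10 GB/s in a future run would expect of order 100 PB/year of new RAW data.
print(projected_raw_pb_per_year(10.0, 1.0, 10.0))   # -> 100.0
```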
Digital Universe Expansion
Data storage issues
Our data problems may still look small compared with the storage needs of the internet giants: business e-mail, video, music, smartphones and digital cameras generate an ever-growing need for storage.
The cost of storage will probably continue to go down, but:
- Commodity high-capacity disks may start to look more like tapes, optimized for multimedia storage and sequential access; they will need to be combined with flash storage for fast random access.
- The residual cost of disk servers will remain.
- While we might be able to write all this data, how long will it take to read it back? Sophisticated parallel I/O and processing will be needed.
- We have to store this amount of data every year, and for many years to come (Long Term Data Preservation).
Exabyte Scale
We are heading towards the Exabyte scale.
CPU: Online requirements
ALICE & LHCb @ Run 3/4:
- HLT farms for online reconstruction/compression or event rejection, with the aim of reducing the data volume to manageable levels; event rate x 100, data volume x 10.
- ALICE estimate: 200k of today's core equivalents, 2M HEP-SPEC-06.
- LHCb estimate: 880k of today's core equivalents, 8M HEP-SPEC-06.
- Looking into alternative platforms to optimize performance and minimize cost: GPUs (1 GPU = 3 CPUs for the ALICE online tracking use case), ARM, Atom.
ATLAS & CMS @ Run 4:
- Trigger rate x 10; CMS 4 MB/event, ATLAS 2 MB/event.
- CMS: assuming linear scaling with pileup, a total factor of 50 increase in HLT power (10M HEP-SPEC-06) would be needed with respect to today's farm (see the arithmetic check below).
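The core counts and HEP-SPEC-06 figures above imply a conversion of roughly 10 HS06 per present-day core, and the CMS factor of 50 implies a present-day HLT farm of about 0.2M HS06; a quick check of that arithmetic:

```python
print(2e6 / 2e5)     # ALICE: 2M HS06 / 200k cores  -> ~10 HS06 per core
print(8e6 / 8.8e5)   # LHCb:  8M HS06 / 880k cores  -> ~9 HS06 per core
print(10e6 / 50)     # CMS:   10M HS06 / factor 50  -> ~0.2M HS06 in today's HLT farm
```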
CPU: Online + Offline
[Chart: estimated CPU needs in MHS06 (scale 0-160) for Run 1 through Run 4, split into ONLINE and GRID (ATLAS, CMS, LHCb, ALICE) contributions, compared with the "Moore's law limit" and the historical grid growth of 25%/year; the gap is labelled "room for improvement".]
Very rough estimate of the new CPU requirements for online and offline processing per year of data taking, using a simple extrapolation of current requirements scaled by the number of events. Little headroom is left; we must work on improving the performance.
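A sketch of the "historical growth of 25%/year" line, compounded over the years between runs; the starting capacity and run spacing are illustrative, not the values used for the figure:

```python
def grid_capacity_mhs06(years_from_run1, capacity_run1=2.0, annual_growth=0.25):
    # Flat-budget hardware evolution: capacity compounds by ~25% per year.
    return capacity_run1 * (1.0 + annual_growth) ** years_from_run1

for run, years in (("Run 1", 0), ("Run 2", 3), ("Run 3", 8), ("Run 4", 13)):
    print(run, round(grid_capacity_mhs06(years), 1), "MHS06")
```

Even compounded over more than a decade, 25%/year growth falls well short of the extrapolated requirements, which is the gap the plot labels "room for improvement".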
How to improve the performance?
- Clock frequency: very little gain to be expected, and no action to be taken.
- Vectors, instruction pipelining, instruction-level parallelism (ILP), hardware threading: potential gain in throughput and in time-to-finish.
- Multi-core, multi-socket: gain in memory footprint and time-to-finish, but not in throughput (see the sketch below).
- Multi-node: covered in the next talk.
Improving the algorithms is the only way to reclaim factors in performance! Running independent jobs per core (as we do now) is the optimal solution for High Throughput Computing applications (this talk).
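A back-of-the-envelope illustration, with made-up numbers, of the trade-off stated above: one independent job per core maximizes throughput but multiplies the memory footprint, while a multi-threaded job on the same node shares memory but does not add events per second per core:

```python
cores = 16
job_rss_gb = 2.0          # assumed memory footprint of one independent job
shareable_fraction = 0.7  # assumed fraction (geometry, conditions, code) that threads could share

one_job_per_core_gb = cores * job_rss_gb
multithreaded_gb = job_rss_gb * (shareable_fraction + cores * (1.0 - shareable_fraction))
print(one_job_per_core_gb)   # 32.0 GB for 16 independent jobs
print(multithreaded_gb)      # ~11.0 GB for one 16-thread job doing the same work
```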
WLCG Collaboration
- Distributed infrastructure of 150 computing centers in 40 countries.
- 300k+ CPU cores (~2M HEP-SPEC-06); the biggest site has ~50k CPU cores, 12 T2 sites have 2-30k CPU cores each.
- Distributed data, services and operations infrastructure.
Distributed computing today
[Monitoring plots, e.g. LHCb running jobs.]
- Running millions of jobs and processing hundreds of PB of data.
- Efficiency (CPU/wall time) ranges from 70% (analysis) to 85% (organized activities: reconstruction, simulation).
- 60-70% of all CPU time on the grid is spent running simulation.
Can it scale?
[Architecture diagrams, e.g. glideinWMS and LHCb's grid overlay.]
- Each experiment today runs its own Grid overlay on top of the shared WLCG infrastructure.
- In all cases the underlying distributed computing architecture is similar, based on the pilot-job model (see the sketch below).
- Can we scale these systems to the expected levels? There is a lot of commonality and potential for consolidation.
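A minimal sketch of the pilot-job model shared by the experiment frameworks: a pilot starts on a worker node and then pulls real payloads from a central queue. The queue interface and payload fields here are hypothetical; real pilots also handle authentication, environment checks, matching and monitoring:

```python
import subprocess

def run_pilot(task_queue):
    """Pull-based scheduling: the pilot asks the central queue for work until none is left."""
    while True:
        payload = task_queue.fetch_next()                    # hypothetical central-queue call
        if payload is None:
            break                                            # no matching work: the pilot exits
        result = subprocess.call(payload["command"])         # run the experiment job on this node
        task_queue.report_done(payload["job_id"], result)    # report the outcome centrally
```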
Grid Tier model
(http://cerncourier.com/cws/article/cern/52744)
Network capabilities and data access technologies have significantly improved our ability to use resources independently of their location.
Global Data Federation
[Map: FAX deployment; B. Bockelman, CHEP 2012.]
FAX (Federated ATLAS Xrootd), AAA (Any Data, Any Time, Anywhere; a 15 PB CMS federation) and AliEn (ALICE) federate storage across sites, including ATLAS T3s and multiple layers of hierarchy.
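What such a federation buys the user, sketched with PyROOT: a file is opened by its global name through a redirector and read from wherever a copy happens to be. The redirector host and file path below are placeholders, not real endpoints:

```python
import ROOT  # PyROOT, built with xrootd support

# Opening a root:// URL lets the federation redirect the client to a site that
# actually holds the file; no local copy or site-specific path is needed.
f = ROOT.TFile.Open("root://federation-redirector.example.org//store/data/some_file.root")
if f and not f.IsZombie():
    print("opened remotely, file size in bytes:", f.GetSize())
    f.Close()
```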
Virtualization
Virtualization provides better system utilization by putting all those many cores to work, resulting in power & cost savings.
Software integration problem
[Diagram: application deployed on top of a layered software stack (libraries, tools, OS).]
Traditional model: horizontal layers, independently developed, maintained by different groups, each with its own lifecycle. The application is deployed on top of the stack:
- Breaks if any layer changes.
- Needs to be re-certified every time something changes.
- Results in a deployment and support nightmare: upgrades are difficult, and switching to new OS versions is even worse.
Decoupling Apps and Ops
[Diagram: stack of Application / Libraries / Tools / Databases / OS.]
Application-driven approach:
1. Start by analysing the application requirements and dependencies.
2. Add the required tools and libraries.
Use virtualization to:
1. Build a minimal OS.
2. Bundle all of this into a Virtual Machine image.
This separates the lifecycles of the application and of the underlying computing infrastructure; a sketch of such an application-driven description follows below.
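A hypothetical, purely illustrative example of the application-driven ordering: the application declares its needs and the VM image is derived from that declaration, rather than the application being certified against whatever stack a site provides. All names below are placeholders:

```python
# Hypothetical application manifest; a build service (e.g. the CernVM tooling
# mentioned later) would turn a description like this into a minimal VM image.
app_spec = {
    "application": "experiment-reco-1.2",
    "libraries":   ["ROOT", "Geant4", "Boost"],   # direct dependencies of the application
    "tools":       ["gcc", "cmake"],              # required build/runtime tools
    "databases":   ["conditions-db-client"],
    "base_os":     "minimal-linux",               # just enough OS to support the above
}

def build_vm_image(spec):
    # Placeholder for the image-building step: start from the minimal OS,
    # add the declared tools and libraries, then bundle everything as a VM image.
    return spec["application"] + ".img"

print(build_vm_image(app_spec))
```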
Cloud: Putting it all together
Server consolidation using virtualization improves resource usage. Behind a common API, IaaS (Infrastructure as a Service) offers public, on-demand, pay-as-you-go infrastructure with goals and capabilities similar to those of academic grids.
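A sketch of the "on demand, pay as you go" idea using Apache libcloud, which wraps many IaaS providers behind one API. The provider choice, credentials, image and size below are placeholders; real usage needs a valid account:

```python
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# One API, many providers: the same calls work against EC2, OpenStack, etc.
Driver = get_driver(Provider.EC2)
conn = Driver("ACCESS_KEY_ID", "SECRET_KEY", region="us-east-1")

size = conn.list_sizes()[0]    # pick some instance flavour
image = conn.list_images()[0]  # pick some machine image (e.g. a CernVM-based one)
node = conn.create_node(name="worker-node-01", size=size, image=image)
print(node.id, node.state)     # the VM is provisioned on demand and billed per use
```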
Public Cloud
At present: 9 large sites/zones, up to ~2M CPU cores per site, ~4M cores in total.
Compared to our Grid: 10x more cores on 1/10 the number of sites, and 100x more users (500,000).
Could we make our own Cloud?
Open source Cloud components:
- Open source Cloud middleware.
- VM building tools and infrastructure: CernVM + CernVM-FS, BoxGrinder, ...
- Common API.
From the Grid heritage:
- Common authentication/authorization.
- High-performance global data federation/storage.
- Scheduled file transfer.
- Experiment distributed computing frameworks already adapted to the Cloud.
To do:
- Unified access and cross-Cloud scheduling.
- Centralize key services at a few large centers.
- Reduce complexity in terms of the number of sites.
Grid on top of Clouds (vision)
[Diagram: AliEn connecting a private CERN cloud, the O2 online-offline facility (HLT, DAQ), T1/T2/T3 centers and public cloud(s); the workloads shown include analysis, reconstruction, calibration, simulation, CAF on demand, custodial data store and data caches.]
And if this is not enough...
[Photo: a leadership-class supercomputer with 18,688 nodes, a total of 299,008 processor cores and one GPU per node.] And there are spare CPU cycles available.
- Ideal for event-generator/simulation-type workloads (simulation accounts for 60-70% of all CPU cycles used by the experiments).
- Simulation frameworks must be adapted to use such resources efficiently where available.
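The node and core counts quoted above imply 16 conventional CPU cores per node alongside each GPU, i.e. plenty of CPU capacity to backfill with simulation when it would otherwise sit idle:

```python
nodes = 18688
cores = 299008
print(cores / nodes)   # -> 16.0 CPU cores per node (plus one GPU per node)
```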
Conclusions
- The resources needed for computing at the HL-LHC are going to be large, but not unprecedented.
- The data volume is going to grow dramatically.
- Projected CPU needs are reaching the Moore's-law ceiling; grid growth of 25% per year is by far not sufficient.
- Speeding up the simulation is very important.
- Parallelization at all levels (application, I/O) is needed to improve performance; this requires re-engineering of the experiment software.
- New technologies such as clouds and virtualization may help to reduce complexity and to fully utilize the available resources.