PLGrid Programme: IT Platforms and Domain-Specific Solutions Developed for the National Grid Infrastructure for Polish Science

PL-Grid: Polish Infrastructure for Supporting Computational Science in the European Research Space

PLGrid Programme: IT Platforms and Domain-Specific Solutions Developed for the National Grid Infrastructure for Polish Science

Jacek Kitowski and Marcin Radecki, ACK Cyfronet AGH, PL-Grid Consortium
VPH-Share Meeting, Cracow, 29.05.2015

Outline
- PLGrid Programme in a nutshell
- Family of the projects: PL-Grid, PLGrid Plus, PLGrid NG, PLGrid Core
- Achievements
- Selected domain solutions
- Conclusions

Past and Present Involvement in European Projects
- 6WINIT (IST-2000-25153) (2001-2003): IPv6 Wireless Internet Initiative
- CROSSGRID (IST-2001-32243) (2002-2005), coordinator, ca. 20 partners: Development of Grid Environments for Interactive Applications
- PELLUCID (IST-2001-34519) (2002-2004): A Platform for Organizationally Mobile Public Employees
- GridStart (IST-2001-34808) (2002-2005): Grid dissemination, standardisation, applications, roadmap
- Pro-Access (IST-2001-38626) (2002-2004): Improving Access of Associated States to Advanced Concepts in Medical Informatics
- EGEE I/II/III (2004-2006-2008): Enabling Grids for E-sciencE in Europe (EU 6FP)
- K-WfGrid (511385) (2004-2007): Knowledge-based Workflow System for Grid Applications
- CoreGrid (IST 004265) (2004-2008): European Research Network ...
- ViroLab (027446) (2006-2009): A virtual laboratory for decision support in viral diseases treatment
- Gredia (FP6-34363) (2006-2008): Grid-enabled access to rich media content
- Int.eu.grid (FP6-031857) (2006-2008): Interactive European Grid
- European Structural Funds (Innovative Economy): family of PLGrid projects (POIG.02.03) (2009-2015); POWIEW (POIG.02.03) (2010-2012): Grand Challenges Computation; PLATON (POIG.02.03) (2008-2012): Platform for Scientific Services
- gSLM (FP7 261547) (2010-2012) and FedSM (2012-2015): Service Delivery and Service Management in Grid Infrastructures and Federated Infrastructures
- MAPPER (FP7 261507) (2010-2013): Multiscale Applications on European e-Infrastructures
- UrbanFlood (FP7 248767) (2010-2013): Real-time Emergency Management
- VPH-Share (FP7 269978) (2011-2015): Virtual Physiological Human
- EUSAS (EDA A-0676-RT-GC) (2010-2013): European Urban Simulation for Asymmetric Scenarios
- EGI-InSPIRE (2010-2014)
- IS-EPOS (2013-2015): Digital Research Space of Induced Seismicity for EPOS
- CTA Collaboration
- VirtROLL (RFCS-CT-2013-00007) (2013-2016): Virtual Strip Rolling Mill
- PRACE 1 and 2; EGI-Engage (2015- ); INDIGO-DataCloud (2015- )

TOP500 Polish Sites

Ranks across TOP500 lists:

System (site)              June 2011  Nov 2012  June 2013  Nov 2013  June 2014  Nov 2014
Zeus (Cyfronet, Poland)       81         106       113        145       175        211
BlueGene/Q (ICM Warsaw)       --         143       170        221       275        342

Systems:
- Zeus, Cyfronet: Cluster Platform SL390/BL2x220, Xeon X5650 6C 2.66 GHz, Infiniband QDR, NVIDIA 2090 (Hewlett-Packard); 11,694 cores in June 2011, 23,932 in Nov 2012, 25,468 from June 2013; Rmax 104.8 / 234.3 / 266.9 TFlop/s; Rpeak 124.4 / 357.5 / 373.9 TFlop/s
- ICM Warsaw: BlueGene/Q, Power BQC 16C 1.60 GHz, custom interconnect (IBM); 16,384 cores; Rmax 172.7 TFlop/s (Nov 2012), then 189.0 TFlop/s; Rpeak 209.7 TFlop/s
- TASK Gdańsk: GALERA PLUS, Action Xeon HP BL2x220/BL490 E5345/L5640, Infiniband (ACTION); 10,384 cores; rank 163 (single appearance); Rmax 65.6 TFlop/s; Rpeak 97.8 TFlop/s
- WCSS Wrocław: Cluster Platform 3000 BL2x220, Xeon X56xx 2.66 GHz, Infiniband (Hewlett-Packard); 6,348 cores; rank 194 (single appearance); Rmax 57.4 TFlop/s; Rpeak 67.5 TFlop/s
- PCSS Poznań: Rackable C1103-G15, Opteron 6234 12C 2.40 GHz, Infiniband QDR (SGI); 9,498 cores; rank 375 (single appearance); Rmax 89.8 TFlop/s; Rpeak 211.1 TFlop/s

Commercial Polish TOP500 entries: Allegro (2011-13), Nasza Klasa (2008, 2010-11), a telecommunications company (2008, 2010).
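
A note on the two performance columns, not on the original slide: Rpeak is the theoretical peak and Rmax the measured LINPACK throughput, so their ratio gives the achieved efficiency. For the mature Zeus configuration:

\[
\eta = \frac{R_{\max}}{R_{\mathrm{peak}}} = \frac{266.9}{373.9} \approx 0.71,
\]

i.e. about 71%, a common figure for Infiniband CPU clusters of that period; the much larger gap for the PCSS entry (89.8/211.1, roughly 43%) suggests an accelerated system, where LINPACK efficiency is typically lower.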

PL-Grid Consortium

Development based on:
- European Regional Development Fund, as part of the Innovative Economy Programme
- Polish scientific communities: ca. 75% of highly rated Polish publications originate in 5 communities; close international collaboration (EGI, ...); previous projects (5FP, 6FP, 7FP, EDA)
- National network infrastructure available: the Pionier national project
- Computing resources: Top500-listed systems

PL-Grid Consortium members: 5 Polish High Performance Computing centres, representing the communities, coordinated by ACC Cyfronet AGH

Implementation of the PL-Grid Programme, adopted by the Consortium in January 2007: a family of projects funded under the Operational Programme Innovative Economy.

Family of PL-Grid Projects coordinated by Cyfronet

PL-Grid (2009-2012)
- People involved: ca. 80 (total, from different Polish centres)
- Outcome: common base infrastructure; National Grid Infrastructure (NGI_PL)
- Resources: 230 TFlops, 3.6 PB

PLGrid Plus (2011-2015)
- People involved: ca. 120
- Outcome: focus on users (training, helpdesk, ...); domain-specific solutions for 13 domains (specific computing environments)
- Extension of resources and services: 500 TFlops, 4.4 PB

PLGrid NG (2014-2015)
- Expected outcome: optimization of resource usage; training; extension of domain-specific solutions by 14 additional domains
- Extension of resources and services: ca. 8 TFlops, some PB

PLGrid Core (2014-2015) (Cyfronet only)
- Expected outcome: Competence Centre; end-user services; Open Science paradigm; large workflow applications; Data Farming mass computation
- Extension of resources and services: ca. 1500 TFlops, 25 PB

Supercomputer Zeus

Configuration:
- Xeon, 23 TB, 169 TFlops
- Opteron, 26 TB, 61 TFlops
- Xeon, 3.6 TB, 136 TFlops
- Xeon, 6 TB, 8 TFlops

ZEUS statistics for 2012 (2013 values in parentheses, 2014 values noted explicitly); users' needs taken into account:
- Almost 8 mln jobs, 21,000+ daily; 7.7 mln jobs in 2014
- 80 mln CPU-hours, ca. 9,130 CPU-years; 13,000 CPU-years in 2014
- 800+ active users
- 100+ PB of scratch space used; 350 PB in 2014
- Longest job: 76 (90) days
- Biggest job: 576 (1,024) cores; 2,400 cores in 2014
- Ca. 50% of CPU time consumed by multicore jobs
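
The CPU-hours and CPU-years figures above are the same quantity in different units; a quick conversion, assuming 8,760 hours per year:

\[
\frac{80 \times 10^{6}\ \text{CPU-hours}}{8760\ \text{hours/year}} \approx 9132\ \text{CPU-years},
\]

which matches the ca. 9,130 CPU-years quoted on the slide.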

Summary of Projects' Results (up-to-date)
- Close collaboration between partners and research communities
- Development of the PL-Grid IT infrastructure and ecosystem
- Development of tools, environments, middleware services and clouds
- Integration: HPC, data-intensive computing, instruments
- Development of 27 domain-specific solutions

New HPC Asset: Prometheus Cluster
- Contract signed 20.10.2014
- Key figures: Rpeak = 1658.9 TFlops, Rmax = 1262.4 TFlops; 1,728 servers; 41,472 Haswell cores; 216 TB RAM (DDR4); 10 PB of disk storage at 180 GB/s; HP Apollo 8000
- Grand opening: May 27, 2015 (this week!)
- Contest for the Prometheus graphics: the winning design was chosen from 42 entries
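
The quoted Rpeak is consistent with the core count. A sanity check, assuming a 2.5 GHz base clock and 16 double-precision FLOPs per core per cycle (two 4-wide AVX2 FMA units on Haswell); neither figure appears on the slide:

\[
R_{\mathrm{peak}} = 41472\ \text{cores} \times 2.5\ \text{GHz} \times 16\ \tfrac{\text{FLOP}}{\text{cycle}} \approx 1658.9\ \text{TFlops}.
\]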

Summary of Projects' Results (up-to-date)

Facilitation of community participation in international collaboration:
- EGI Council, EGI Executive Board
- FP7 (VPH-Share, RFCS VirtROLL, ...)
- EDA (EUSAS)
- EGI-InSPIRE, FedSM, EGI-Engage, INDIGO DataCloud, EPOS, CTA, ...

Publications:
- 26 papers on PL-Grid Project results
- 36 papers on PLGrid Plus Project results (147 authors, 76 reviewers)

Papers by project:

                                  PL-Grid            PLGrid Plus        PLGrid Core  PLGrid NG   Total
                                  (07.2009-03.2012)  (06.2012-10.2014)  (10.2014- )  (10.2014- )
Conference papers                 15                 77                 5            5           103
Journal papers and book chapters  28                 40                 0            0           68
Total                             43                 117                5            5           171

Journal Publications (subjective selection)

Journals (impact factor): Phys. Lett. B (6.019); J. High Energy Phys. (6.22); Astronomy & Astrophys. (4.479); Inorganic Chem. (4.794); J. Org. Chem. (4.638); Opt. Lett. (3.179); Appl. Phys. Lett. (3.515); J. Comput. Chem. (3.601); J. Phys. Chem. B (3.377); Soft Matter (4.151); Int. J. Hydrogen Energy (2.93); Physica B (1.133); J. Chem. Phys. (3.122); J. Phys. Chem. Lett. (6.687); Phys. Chem. Chem. Phys. (4.638); Fuel Processing Techn. (3.019); J. Magn. & Magn. Mat. (2.002; also listed at 1.892); Eur. J. Inorg. Chem. (2.965); Chem. Phys. Lett. (1.991); Phys. Rev. B (3.664); Eur. Phys. J. (2.421); Future Gen. Comp. Syst. (2.639); J. Phys. Chem. C (4.835); Crystal Growth & Design (4.558); Macromolecules (5.927); Astrophys. J. Lett. (5.602); Phys. Rev. Lett. (7.728); J. Chem. Theor. Appl. (5.31); Astrophys. J. (6.28); Chem. Physics (2.028); Molec. Pharmaceutics (4.787); Eur. J. Pharmacology (2.684); Energy (4.159); Carbon (6.16); J. Biogeography (4.969); Electrochem. Comm. (4.287)

Conferences: Cracow Grid Workshop (since 2001); KU KDM (since 2008)

Summary of Projects' Results (up-to-date)

[Chart: number of infrastructure users, monthly from 01-2013 to early 2015. All registered accounts grew from ca. 2,559 to ca. 3,756; infrastructure users from ca. 1,333 to ca. 2,339; employee accounts from 227 to 353.]

[Chart: number of Grid users of global services: gLite, UNICORE, QosCosGrid.]

Implementation of the PL-Grid Programme: Deployed IT Platforms and Tools, selected examples (by Cyfronet):
- GridSpace
- InSilicoLab
- Scalarm
- onedata
- Cloud computing