PRACE: access to Tier-0 systems and enabling the access to ExaScale systems. Dr. Sergi Girona, Managing Director and Chair of the PRACE Board of Directors
1 PRACE: access to Tier-0 systems and enabling the access to ExaScale systems Dr. Sergi Girona Managing Director and Chair of the PRACE Board of Directors
2 PRACE aisbl, a persistent pan-European supercomputing infrastructure. 25 members, 4 hosting members: France, Germany, Italy and Spain. Enables world-class science through large-scale simulations; offers HPC services on leading-edge capability systems; awards its resources through a single, fair pan-European peer-review process for open research.
3 PRACE History: An Overview of a Success Story. HPC became part of the ESFRI Roadmap; creation of a vision involving 15 European countries; creation of the Scientific Case; signature of the MoU; creation of the PRACE AISBL, the RI's legal entity. Stages: HPCEUR, HET, PRACE Initiative, PRACE RI; projects: PRACE Preparatory Phase, PRACE-1IP, PRACE-2IP, PRACE-3IP, PRACE-4IP.
4 PRACE at a glance: HPC services in Europe. 530 M of funding for the period; 25 member states, including 4 Hosting Members (France, Germany, Italy, Spain); 346 scientific projects enabled; >9 billion core hours awarded since 2010 through peer review, with scientific excellence as the main criterion; 15 Pflop/s of peak performance on 6 world-class systems; open R&D access for industrial users; 3000 people trained by the 6 PRACE Advanced Training Centres and other events.
5 PRACE's Achievements in 4 years: 346 projects and 9.2 thousand million core-hours awarded.
Breakdown by domain: Universe Sciences 21%; Chemical Sciences and Materials 21%; Fundamental Physics 18%; BioChemistry, Bioinformatics and Life Sciences 13%; Engineering and Energy 13%; Earth System Sciences 10%; Mathematics and Computer Sciences 4%.
Tier-0 systems: MareNostrum (IBM), BSC, Barcelona, Spain; JUQUEEN (IBM BlueGene/Q), GAUSS/FZJ, Jülich, Germany; CURIE (Bull Bullx), GENCI/CEA, Bruyères-le-Châtel, France; SuperMUC (IBM), GAUSS/LRZ, Garching, Germany; HERMIT (Cray), GAUSS/HLRS, Stuttgart, Germany; FERMI (IBM BlueGene/Q), CINECA, Bologna, Italy.
6 Scientific examples by domain:
Climate: 144 million core hours on Hermit (DE) for the UK. PRACE will give the UK Met Office a 3-year advance in the development of high-resolution global weather & climate models.
Astrophysics: 300 million core hours (200 on CURIE (FR), the remainder on SuperMUC (DE)) for DE. Research on supernovae and heavy chemical elements; this PRACE grant is one of the biggest worldwide allocations in this domain.
Energy: 30 million core hours on SuperMUC (DE) for Finland. PRACE resources enable the 1st European comparison of first-principles simulations to multi-scale experimental data for fusion energy.
Chemistry: 59.8 million core hours on JUQUEEN (DE) for Switzerland. The goal: catching CO2 in a solvent, making the exhausts cleaner, and reducing the cost of regenerating the solvent by optimising the regeneration process.
Seismology: 56 million core hours on CURIE (FR) and 82 million core hours on SuperMUC (DE) for FR. PRACE resources are used to explore the nonlinearity involved in the dependence of local ground shaking on geological structure.
Life Sciences: 53.4 million core hours on SuperMUC (DE) for Italy. Research on nervous impulses to contribute to the design of drugs that will modulate their activity; 30 times larger than a typical allocation.
7 PRACE peer-review access. Free of charge; awardees must publish their results at the end of the award period. The selection criterion is scientific excellence. Types of resource allocations for scientists: Project Access (calls every 6 months), for a specific project, with an award period of ~1 to 3 years, open to individual researchers and research groups (no restriction of nationality for either researcher or centre), and requiring a demonstration of the project's technical feasibility; Preparatory Access, optionally with support from PRACE experts, to prepare proposals for Project Access.
8 Preparatory Access. Permanently open, with quarterly cut-off dates (03/06/09/12). Intended to prepare proposals for Project Access. Open to scientists, researchers and industrial users; not open for production runs or research activities. Testing scalability: Type A, allocation for 2 months. Code development or optimisation: Type B, allocation for 6 months; Type C, allocation for 6 months with support from PRACE experts. Fixed amount of resources, depending on the system. A final report is required by the end of the allocation.
9 Fast allocations, with a technical review only. The evaluation process lasts only 6 working weeks: eligibility check plus applicants' corrections, technical review, evaluation and BoD decision. Letters are sent to applicants approx. 1 month after the cut-off date; awarded projects start approx. 45 days after the cut-off date. Preparatory Access Type C awards: 19 in 2011, 14 in 2012, 40 in 2013*, 8 in 2014.
10 Duration and awarded resources per type of Preparatory Access (2014):
System: Type A (2 months) / Type B/C (6 months)
Curie FN/TN: CPU / CPU
Curie Hybrid: GPU / GPU
Hermit: CPU / CPU
FERMI: CPU / CPU
JUQUEEN: CPU / CPU
MareNostrum: CPU / CPU
MareNostrum Hybrid: MIC / MIC
SuperMUC: CPU / CPU
11 Scientific achievements: examples using GPUs (12/09/2014).
DEVELOPING NANO DEVICES: exploring structural properties of new materials for next-generation transistors (QUAntum SImulation of ultimate NanO-devices). Use of the TB_Sim code on up to 288 GPUs; Fortran/C, MPI + CUDA, use of cuBLAS & MAGMA; up to 16x gain using GPUs vs standard CPUs; x million GPU hours on CURIE hybrid (France).
UNDERSTANDING OUR UNIVERSE: assessing the creation of magnetic fields generated by self-excited dynamos (magnetohydrodynamics). Parody code, MPI + CUDA; up to 30x gain using GPUs vs standard CPUs; x million GPU hours on CURIE hybrid (France).
12 Scientific achievements: examples using GPUs.
UNDERSTANDING EXTREME CLIMATE EVENTS. TWISTER: Large-Eddy Simulation of tornado formation by a supercell (thunderstorm), to learn more about the inner (thermo-)dynamical structure of tornadoes. GALES code with a solver fully resident on GPU; C++, MPI/CUDA with use of CUFFT. Use of a Preparatory Access for early tuning on CURIE. 0.5 million GPU hours on CURIE hybrid (France), equivalent to 100M core hours on BG/P.
13 Scientific achievements: Preparatory Access using MIC.
Scalability of turbulent jet simulations; Carlos B. da Silva, Instituto Superior Técnico, Lisboa; Engineering and Energy; Type A.
Performance and accuracy of the linear-scaling DFT method applied to a complex metal oxide surface; Ruben Perez, Universidad Autónoma de Madrid; Chemistry and Materials; Type A.
Atomistic simulations of heterogeneous media on the Intel Xeon Phi; Daniele Coslovich, Université Montpellier 2, France; Fundamental Physics; Type B.
Aerodynamic characterization of swirling flows in combustors; Teresa Parra, Universidad de Valladolid; Engineering and Energy; Type B.
High performance computation for short read alignment; Paul Walsh, NSilico Life Science Ltd, Cork, Ireland; Medicine and Life Sciences; Type C.
14 PRACE GPU and MIC code enabling. Supported by the PRACE-xIP projects; benefited from the support of PRACE experts in 25 EU countries. Code-enabling activities: work between PRACE experts and code owners, from single codes to communities; code enabling from kernel level up to full applications; duration from 6 months (Preparatory Access Type C) to 2-3 years (code enabling on community codes: climate, astrophysics, materials, ). New specific activity on Intel MIC (23 applications tested/enabled). Continuous evaluation of programming models: RapidMind, CUDA, OpenACC, OpenHMPP, OmpSs, OpenCL, OpenMP, Assessment of heterogeneous and autotuned runtime systems: Nanos++, StarPU, All materials (deliverables, white papers) freely available:
15 PRACE training on GPU and manycore. 6 PATCs (PRACE Advanced Training Centres) established. Between Aug 2008 and March 2014, 9079 person-days of training were delivered, benefiting 2874 people in Europe; 71 different courses in 2013, with similar activity forecast for 2014. Specific training sessions on GPU and manycore: 04-05 Jun: Programming the Xeon (EPCC); 02-06 Jun: Introduction to CUDA (BSC); October 2014: Introduction to (CSC); December 2014: Programming on ; May: Heterogeneous Programming on GPUs with MPI + (BSC). Information and materials available:
16 PRACE prototyping on GPU and manycore. PRACE-xIP assessment of technologies for future multi-Petaflop/s systems: 22 prototypes (CPUs, accelerators, I/O, cooling, programming models), with a focus on energy efficiency. Benefits: for PRACE partners, insight for future procurements; for technology providers, feedback on usability and user requirements; for users, early access to future architectures; for the HPC community, access to all results published as white papers, useful as inputs for some Tier-1 or Tier-2 systems. All materials (deliverables, white papers) freely available:
17 PRACE Prototyping: a brief history. PRACE Preparatory Phase ( ): prototypes for the upcoming Tier-0 systems [4.4 M, 5 prototypes]; prototypes for multi-petaflop/s technologies [2.4 M, 8 prototypes]. PRACE-1IP ( ): systems and components for future highly energy-efficient multi-petascale system architectures [4.8 M, 8 prototypes]. PRACE-2IP ( ): complement to the PRACE-1IP prototypes (new accelerators and direct liquid cooling) [2.2 M, 3 prototypes]. PRACE-3IP ( ): PCP on whole-system design for energy-efficient HPC; 9 M for 5 development contracts leading to at least 2 prototypes.
18 Software and Scalability: prototyping themes in PRACE PP and PRACE-1IP. Compute node alternatives: hybrid CPU/GPU, ARM, GPGPU virtualization, SGI UV/ICE; accelerators and programming models (Cell, GPU, Clearspeed, LRB); FPGA and DSP. Power consumption: direct liquid free cooling, SSD. Tier-0 exascale challenges: NUMA-CIC, exascale I/O, MPP I/O, I/O and memory, resilience.
19 Software and Scalability: prototyping themes in PRACE-1IP and 2IP (2012). Compute node alternatives: EURORA, scalable hybrid CPU/GPU, SHAVE-PRACE ARM+GPU, GPGPU virtualization. Power consumption: direct liquid free cooling, GPU, SSD, AMFT. Tier-0 exascale challenges: I/O and memory, resilience.
20 EoI Big Data: 33 proposals from 8 scientific domains. Earth Sciences and Universe Sciences have the biggest need for long-term access to data and extensive computing resources. Storage needs are generally at the TB scale, but a few projects reach the PB level, with 5 years of access to stored data; first with an embargo on the data, later with Open Access for the scientific community. A PRACE Big Data policy would help to deploy the full scientific potential of Tier-0 simulations.
21 Centres of Excellence for computing applications. Provision of services such as: developing, optimising (including, if needed, re-designing) and scaling HPC application codes towards peta- and exascale computing; working in synergy with the pan-European HPC infrastructure, including by identifying suitable applications for co-design activities relevant to the development of HPC technologies towards exascale. H2020 WP EINFRA: Centres of Excellence for computing applications.
22 Summary of the PRACE offer: HPC services on leading-edge capability systems; Preparatory Access for code enabling; training, including on new technologies; prototyping of new technologies; a Big Data policy for HPC users; cooperation with stakeholders; Centres of Excellence.
23 Thanks for your attention. If you have any further questions, don't hesitate to contact me: director@prace-ri.eu
Partnership for Advanced Computing in Europe
More informationAuto-Tuning TRSM with an Asynchronous Task Assignment Model on Multicore, GPU and Coprocessor Systems
Auto-Tuning TRSM with an Asynchronous Task Assignment Model on Multicore, GPU and Coprocessor Systems Murilo Boratto Núcleo de Arquitetura de Computadores e Sistemas Operacionais, Universidade do Estado
More informationECDF Infrastructure Refresh - Requirements Consultation Document
Edinburgh Compute & Data Facility - December 2014 ECDF Infrastructure Refresh - Requirements Consultation Document Introduction In order to sustain the University s central research data and computing
More informationHigh Performance Applications over the Cloud: Gains and Losses
High Performance Applications over the Cloud: Gains and Losses Dr. Leila Ismail Faculty of Information Technology United Arab Emirates University leila@uaeu.ac.ae http://citweb.uaeu.ac.ae/citweb/profile/leila
More informationBig Data Management in the Clouds and HPC Systems
Big Data Management in the Clouds and HPC Systems Hemera Final Evaluation Paris 17 th December 2014 Shadi Ibrahim Shadi.ibrahim@inria.fr Era of Big Data! Source: CNRS Magazine 2013 2 Era of Big Data! Source:
More informationSupercomputer Center Management Challenges. Branislav Jansík
Supercomputer Center Management Challenges Branislav Jansík 2000-2004 PhD at KTH Stockholm Molecular properties for heavy element compounds Density Functional Theory for Molecular Properties 2004-2006
More informationDutch HPC Cloud: flexible HPC for high productivity in science & business
Dutch HPC Cloud: flexible HPC for high productivity in science & business Dr. Axel Berg SARA national HPC & e-science Support Center, Amsterdam, NL April 17, 2012 4 th PRACE Executive Industrial Seminar,
More informationBig Data Visualization on the MIC
Big Data Visualization on the MIC Tim Dykes School of Creative Technologies University of Portsmouth timothy.dykes@port.ac.uk Many-Core Seminar Series 26/02/14 Splotch Team Tim Dykes, University of Portsmouth
More informationOperating System for the K computer
Operating System for the K computer Jun Moroo Masahiko Yamada Takeharu Kato For the K computer to achieve the world s highest performance, Fujitsu has worked on the following three performance improvements
More informationOverview on Modern Accelerators and Programming Paradigms Ivan Giro7o igiro7o@ictp.it
Overview on Modern Accelerators and Programming Paradigms Ivan Giro7o igiro7o@ictp.it Informa(on & Communica(on Technology Sec(on (ICTS) Interna(onal Centre for Theore(cal Physics (ICTP) Mul(ple Socket
More informationAgenda. HPC Software Stack. HPC Post-Processing Visualization. Case Study National Scientific Center. European HPC Benchmark Center Montpellier PSSC
HPC Architecture End to End Alexandre Chauvin Agenda HPC Software Stack Visualization National Scientific Center 2 Agenda HPC Software Stack Alexandre Chauvin Typical HPC Software Stack Externes LAN Typical
More informationJUROPA Linux Cluster An Overview. 19 May 2014 Ulrich Detert
Mitglied der Helmholtz-Gemeinschaft JUROPA Linux Cluster An Overview 19 May 2014 Ulrich Detert JuRoPA JuRoPA Jülich Research on Petaflop Architectures Bull, Sun, ParTec, Intel, Mellanox, Novell, FZJ JUROPA
More informationHigh Performance. CAEA elearning Series. Jonathan G. Dudley, Ph.D. 06/09/2015. 2015 CAE Associates
High Performance Computing (HPC) CAEA elearning Series Jonathan G. Dudley, Ph.D. 06/09/2015 2015 CAE Associates Agenda Introduction HPC Background Why HPC SMP vs. DMP Licensing HPC Terminology Types of
More informatione-infrastructures in Horizon 2020 Vision, approach, drivers, policy background, challenges, WP structure INFODAY France Paris, 25 mars 2014
e-infrastructures in Horizon 2020 Vision, approach, drivers, policy background, challenges, WP structure INFODAY France Paris, 25 mars 2014 Jean-Luc Dorel European Commission DG CNECT einfrastructure Vision
More informationKeeneland Enabling Heterogeneous Computing for the Open Science Community Philip C. Roth Oak Ridge National Laboratory
Keeneland Enabling Heterogeneous Computing for the Open Science Community Philip C. Roth Oak Ridge National Laboratory with contributions from the Keeneland project team and partners 2 NSF Office of Cyber
More informationANALYSIS OF SUPERCOMPUTER DESIGN
ANALYSIS OF SUPERCOMPUTER DESIGN CS/ECE 566 Parallel Processing Fall 2011 1 Anh Huy Bui Nilesh Malpekar Vishnu Gajendran AGENDA Brief introduction of supercomputer Supercomputer design concerns and analysis
More informationOpenMP Programming on ScaleMP
OpenMP Programming on ScaleMP Dirk Schmidl schmidl@rz.rwth-aachen.de Rechen- und Kommunikationszentrum (RZ) MPI vs. OpenMP MPI distributed address space explicit message passing typically code redesign
More informationPanasas High Performance Storage Powers the First Petaflop Supercomputer at Los Alamos National Laboratory
Customer Success Story Los Alamos National Laboratory Panasas High Performance Storage Powers the First Petaflop Supercomputer at Los Alamos National Laboratory June 2010 Highlights First Petaflop Supercomputer
More informationData Centric Interactive Visualization of Very Large Data
Data Centric Interactive Visualization of Very Large Data Bruce D Amora, Senior Technical Staff Gordon Fossum, Advisory Engineer IBM T.J. Watson Research/Data Centric Systems #OpenPOWERSummit Data Centric
More informationEUFORIA: Grid and High Performance Computing at the Service of Fusion Modelling
EUFORIA: Grid and High Performance Computing at the Service of Fusion Modelling Miguel Cárdenas-Montes on behalf of Euforia collaboration Ibergrid 2008 May 12 th 2008 Porto Outline Project Objectives Members
More informationThe Lattice Project: A Multi-Model Grid Computing System. Center for Bioinformatics and Computational Biology University of Maryland
The Lattice Project: A Multi-Model Grid Computing System Center for Bioinformatics and Computational Biology University of Maryland Parallel Computing PARALLEL COMPUTING a form of computation in which
More informationEmerging storage and HPC technologies to accelerate big data analytics Jerome Gaysse JG Consulting
Emerging storage and HPC technologies to accelerate big data analytics Jerome Gaysse JG Consulting Introduction Big Data Analytics needs: Low latency data access Fast computing Power efficiency Latest
More information