Extreme Scale Computing at LRZ
Dieter Kranzlmüller
Munich Network Management Team
Ludwig-Maximilians-Universität München (LMU) & Leibniz Supercomputing Centre (LRZ) of the Bavarian Academy of Sciences and Humanities
Leibniz Supercomputing Centre of the Bavarian Academy of Sciences and Humanities
Topics: networks, Grid computing, high performance computing, cloud computing, service management, virtualization, IT security.
With 156 employees + 38 extra staff, for more than … students and more than … employees (including scientists).
Buildings: Visualisation Centre, Institute Building, Lecture Halls, and the cuboid containing the computing systems (72 x 36 x 36 meters).
Leibniz Supercomputing Centre of the Bavarian Academy of Sciences and Humanities
- Computer Centre for all Munich Universities
- IT service provider: Munich Scientific Network (MWN), web servers, e-learning, e-mail, groupware
- Special equipment: Virtual Reality Laboratory, video conferencing, scanners for slides and large documents, large-scale plotters
- IT competence centre: hotline and support; consulting (security, networking, scientific computing, …); courses (text editing, image processing, UNIX, Linux, HPC, …)

The Munich Scientific Network (MWN)
Leibniz Supercomputing Centre of the Bavarian Academy of Sciences and Humanities
- Regional Computer Centre for all Bavarian Universities
- Computer Centre for all Munich Universities

Virtual Reality & Visualization Centre (LRZ)
Examples from the V2C

Leibniz Supercomputing Centre of the Bavarian Academy of Sciences and Humanities
- National Supercomputing Centre
- Regional Computer Centre for all Bavarian Universities
- Computer Centre for all Munich Universities
Gauss Centre for Supercomputing (GCS)
- Combination of the three German national supercomputing centres: John von Neumann Institute for Computing (NIC), Jülich; High Performance Computing Center Stuttgart (HLRS); Leibniz Supercomputing Centre (LRZ), Garching near Munich
- Founded on 13 April 2007
- Hosting member of PRACE (Partnership for Advanced Computing in Europe)

National HPC strategy (pyramid):
- Tier 0: European supercomputing centres
- Tier 1: national supercomputing centres — 3: Gauss Centre for Supercomputing (GCS) (Garching, Stuttgart, Jülich)
- Tier 2: thematic HPC centres and centres with regional responsibilities — ca. 10: Aachen, Berlin, DKRZ, Dresden, DWD, Karlsruhe, Hannover, MPG/RZG, and the like
- Tier 3: HPC servers — ca. 100: at universities and institutes
PRACE Research Infrastructure Created
- Establishment of the legal framework: PRACE AISBL (Association Internationale Sans But Lucratif) created with its seat in Brussels in April; 20 members representing 20 European countries; inauguration in Barcelona on June 9
- Funding secured: … million from France, Germany, Italy and Spain, provided as Tier-0 services on a TCO basis; a funding decision for 100 million in the Netherlands is expected soon; 70+ million from EC FP7 for preparatory and implementation grants (INFSO-RI-… and …); complemented by ~60 million from PRACE members

PRACE Tier-0 systems:
- GENCI: Bull cluster, 1.7 PFlop/s
- CINECA: IBM BG/Q, 2.1 PFlop/s
- HLRS: Cray XE6, 1 PFlop/s
- FZJ: IBM Blue Gene/Q, 5.9 PFlop/s
- BSC: IBM System x iDataPlex, 1 PFlop/s
- LRZ: IBM System x iDataPlex, 3.2 PFlop/s
PRACE Tier-0 Access
- Single pan-European peer review; call announcements at http://…project.eu/call-Announcements?lang=en
- Early Access Call in May: proposals asked for 1,870 million core hours; 10 projects were granted 328 million core hours; principal investigators from D (5), UK (2), NL (1), I (1), PT (1); involves researchers from 31 institutions in 12 countries (a quick consistency check of these numbers follows below)
- Further calls are scheduled every 6 months: call opens in February → access in September of the same year; call opens in September → access in March of the next year
- Example from the 8th Regular Call (closed on 15 October 2013): "Spatially adaptive radiation-hydrodynamical simulations of reionization"; project leader: Dr Andreas Pawlik, Max Planck Society, Germany; research field: Universe Sciences; resource awarded: 33,800,000 core hours on SuperMUC, Germany

Leibniz Supercomputing Centre of the Bavarian Academy of Sciences and Humanities
- European Supercomputing Centre
- National Supercomputing Centre
- Regional Computer Centre for all Bavarian Universities
- Computer Centre for all Munich Universities
Systems: SuperMUC, SGI UV, SGI Altix, Linux clusters, Linux hosting and housing
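As a rough consistency check of the Early Access Call figures above (this arithmetic is not on the slide): granting 328 of the 1,870 million core hours requested corresponds to roughly an 18% success rate by volume, and the average award of about 33 million core hours per project matches the size of the SuperMUC award quoted from the 8th Regular Call.

$$ \frac{328}{10} = 32.8 \ \text{million core hours per project}, \qquad \frac{328}{1870} \approx 0.18 $$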
LRZ video: SuperMUC rendered on SuperMUC by LRZ — http://youtu.be/olas6iiqwrq

Top 500 Supercomputer List (June 2012)
LRZ supercomputers next to come (2014): SuperMUC Phase II, 6.4 PFlop/s

SuperMUC Phase
SuperMUC and its predecessors (figure sequence showing the machine halls, annotated 10 m, 11 m and 22 m)

LRZ Building Extension
Picture: Horst-Dieter Steinhöfer; figure: Herzog+Partner for StBAM2 (staatl. Hochbauamt München 2); picture: Ernst A. Graf
Increasing numbers

Date  System              Flop/s          Cores
2000  HLRB-I              2 TFlop/s
      HLRB-II             62 TFlop/s
2012  SuperMUC            3,200 TFlop/s
2014  SuperMUC Phase II   6.4 PFlop/s (planned)

SuperMUC architecture
- Internet connection; archive and backup ~30 PB; snapshots/replica 1.5 PB (separate fire section)
- $HOME: 1.5 PB / 10 GB/s (NAS), 80 Gbit/s, disaster recovery site
- GPFS for $WORK and $SCRATCH: 10 PB at 200 GB/s, served via I/O nodes
- Compute nodes: 18 thin-node islands (SB-EP, 16 cores/node, 2 GB/core, each island >8,000 cores) and 1 fat-node island (WM-EX, 40 cores/node, 6.4 GB/core, 8,200 cores → SuperMIG)
- Interconnect: non-blocking within an island, pruned tree (4:1) between islands
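The island figures on the architecture slide allow a rough sanity check of SuperMUC's aggregate core count. The sketch below is illustrative only: the 16 cores/node and the 8,200-core fat island are taken from the slide, while the 512 nodes per thin island is an assumption (the slide only states ">8,000 cores" per island).

```c
#include <stdio.h>

/* Rough sanity check of SuperMUC's core count from the island figures above.
 * 512 nodes per thin island is an assumption; 16 cores/node (SB-EP) and the
 * 8,200-core fat island (WM-EX / SuperMIG) are taken from the slide. */
int main(void) {
    const int thin_islands     = 18;
    const int nodes_per_island = 512;   /* assumption, not on the slide */
    const int cores_per_node   = 16;    /* from the slide */
    const int fat_cores        = 8200;  /* from the slide */

    int thin_cores = thin_islands * nodes_per_island * cores_per_node;
    printf("thin-node cores: %d\n", thin_cores);               /* 147456 */
    printf("total cores    : %d\n", thin_cores + fat_cores);   /* 155656 */
    return 0;
}
```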
Power consumption at LRZ (chart: electricity consumption in MWh)

Cooling SuperMUC
IBM System x iDataPlex Direct Water Cooled Rack: iDataPlex DWC rack with water-cooled nodes (front view; rear view of the water manifolds). Torsten Bloth, IBM Lab Services, IBM Corporation.

IBM iDataPlex dx360 M4. Torsten Bloth, IBM Lab Services, IBM Corporation.
Cooling infrastructure (photos: StBAM2, staatl. Hochbauamt München 2)

SuperMUC HPL energy consumption (chart: power in kW and integrated energy in kWh over the HPL run, showing machine-room, PDU and infrastructure power as well as the HPL start and end times)
Energy efficiency (single number in GFlop/s per Watt):
- 0.9380 (PDU, 10-minute resolution, whole run)
- 0.9359 (PDU, 1-minute resolution, whole run, without cooling)
- 0.9359 (PDU, 1-minute resolution, whole run, cooling included)
- 0.8871 (machine-room measurement, whole run)
- 0.7296 (infrastructure measurement, whole run)
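For orientation, the efficiency numbers above can be turned into an average power draw by dividing the sustained HPL performance by GFlop/s-per-Watt. The sketch below is a back-of-the-envelope estimate: the ~2.9 PFlop/s Rmax value is taken as an assumption from the public June 2012 Top500 entry mentioned on an earlier slide, not from this slide.

```c
#include <stdio.h>

/* Back-of-the-envelope: average power = sustained HPL performance / efficiency.
 * Rmax ~2.897 PFlop/s is assumed from SuperMUC's June 2012 Top500 listing;
 * the efficiency values are the ones reported on the slide above. */
int main(void) {
    const double rmax_gflops = 2.897e6;  /* ~2.897 PFlop/s expressed in GFlop/s */
    const double eff[]   = { 0.9380, 0.8871, 0.7296 };
    const char  *label[] = { "PDU", "machine room", "infrastructure" };

    for (int i = 0; i < 3; ++i)
        printf("%-15s ~%.2f MW average during HPL\n",
               label[i], rmax_gflops / eff[i] / 1.0e6);  /* W -> MW */
    return 0;
}
```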
LRZ

LRZ application mix
- Computational fluid dynamics: optimisation of turbines and wings, noise reduction, air conditioning in trains
- Fusion: plasma in a future fusion reactor (ITER)
- Astrophysics: origin and evolution of stars and galaxies
- Solid-state physics: superconductivity, surface properties
- Geophysics: earthquake scenarios
- Materials science: semiconductors
- Chemistry: catalytic reactions
- Medicine and medical engineering: blood flow, aneurysms, air conditioning of operating theatres
- Biophysics: properties of viruses, genome analysis
- Climate research: currents in oceans
1st LRZ Extreme Scale Workshop
- July 2013: 1st LRZ Extreme Scale Workshop
- Participants: 15 international projects
- Prerequisite: a successful run on 4 islands (32,768 cores)
- Participating groups (software packages): LAMMPS, VERTEX, GADGET, WaLBerla, BQCD, Gromacs, APES, SeisSol, CIAO
- Successful results (> … cores) were invited to participate in the ParCo conference (Sept. 2013), including a publication of their approach

1st LRZ Extreme Scale Workshop (operation)
- Regular SuperMUC operation: 4 islands maximum, batch scheduling system
- The entire SuperMUC was reserved for 2.5 days for the challenge: 0.5 days for testing, 2 days for executing, with 16 (of 19) islands available
- Consumed computing time for all groups: 1 hour of runtime = … CPU hours (1 year in total)
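For scale, wall-clock time on the reserved partition can be converted into core-hours. The figures below are illustrative only, not the (partially lost) numbers from the slide, and assume roughly 8,192 cores per island as in the core-count sketch earlier:

$$ 16 \times 8192 \approx 1.3\times10^{5}\ \text{cores}, \qquad 1\ \text{h} \approx 1.3\times10^{5}\ \text{core-hours}, \qquad 48\ \text{h} \approx 6.3\times10^{6}\ \text{core-hours} $$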
Results (sustained TFlop/s on … cores)

Name      MPI         # cores   Description           TFlop/s per island   TFlop/s max
Linpack   IBM                   TOP500 run
VERTEX    IBM                   Plasma physics
GROMACS   IBM, Intel            Molecular modelling
SeisSol   IBM                   Geophysics
waLBerla  IBM                   Lattice Boltzmann
LAMMPS    IBM                   Molecular modelling
APES      IBM                   CFD                   6                    47
BQCD      Intel                 Quantum physics

Results
- 5 software packages ran on the maximum of 16 islands: LAMMPS, VERTEX, GADGET, WaLBerla, BQCD
- VERTEX reached 245 TFlop/s on 16 islands (A. Marek)
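Reading the two performance columns together (an illustrative calculation, not taken from the slide): VERTEX's 245 TFlop/s across 16 islands corresponds to an average per-island rate, the quantity reported in the "TFlop/s per island" column:

$$ \frac{245\ \text{TFlop/s}}{16\ \text{islands}} \approx 15.3\ \text{TFlop/s per island} $$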
Lessons learned — technical perspective
- Hybrid (MPI+OpenMP) codes on SuperMUC are still slower than pure MPI (e.g. GROMACS), but hybrid applications scale to larger core counts (e.g. VERTEX)
- Core pinning needs a lot of experience on the programmer's side (a minimal placement-check sketch follows after this section)
- Parallel I/O remains a challenge for many applications, with regard to both stability and speed
- Several stability issues with GPFS were observed for very large jobs, caused by writing thousands of files into a single directory; this will be improved in upcoming versions of the application codes

Next steps
- The LRZ Extreme Scale Benchmark Suite (LESS) will be available in two versions: public and internal
- All teams will have the opportunity to run performance benchmarks after upcoming SuperMUC maintenances
- 2nd LRZ Extreme Scaling Workshop, 2-5 June 2014: full-system production runs on 18 islands with sustained PFlop/s performance (4 h SeisSol, 7 h GADGET); 4 existing + 6 additional full-system applications; high I/O bandwidth in user space is possible (66 GB/s out of a 200 GB/s maximum); an important goal is to minimize energy × runtime (3-15 W/core)
- Initiation of the LRZ Partnership Initiative πCS
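Since core pinning and the hybrid MPI+OpenMP question come up in the lessons learned, here is a minimal sketch (not from the slides) of the kind of placement check commonly used when debugging pinning in hybrid codes: each OpenMP thread of each MPI rank reports the CPU it actually runs on, so misplaced threads become visible immediately.

```c
/* Minimal hybrid MPI+OpenMP placement check, an illustrative sketch only.
 * Compile e.g. with "mpicc -fopenmp placement_check.c" and launch with the
 * site's MPI runner; compare the output against the intended pinning. */
#define _GNU_SOURCE
#include <mpi.h>
#include <omp.h>
#include <sched.h>    /* sched_getcpu(), Linux-specific */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv) {
    int provided, rank, size;
    char host[64];

    /* FUNNELED is sufficient here: only the master thread calls MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    gethostname(host, sizeof(host));

    #pragma omp parallel
    {
        /* Each thread reports where it actually runs. */
        printf("host %s  rank %d/%d  thread %d/%d  on cpu %d\n",
               host, rank, size,
               omp_get_thread_num(), omp_get_num_threads(), sched_getcpu());
    }

    MPI_Finalize();
    return 0;
}
```

On systems of this kind the actual pinning is controlled by the batch environment and the MPI launcher settings; a report like this merely makes misplacements visible rather than fixing them.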
Partnership Initiative Computational Sciences (πCS)
- Individualized services for selected scientific groups in a flagship role: dedicated point of contact; individual support, guidance and targeted training & education; planning dependability for use-case-specific optimized IT infrastructures; early access to the latest IT-infrastructure developments (hardware and software) and specification of future requirements; access to the IT competence network and expertise in the Computer Science and Mathematics departments
- Partner contribution: embedding IT experts in the user groups; joint research projects (including funding); scientific partnership and joint publications
- LRZ benefits: understanding the (current and future) needs and requirements of the respective scientific domain; developing future services for all user groups

Partnership Initiative Computational Sciences (πCS) — goals for LRZ
- Thematic focusing: Environmental Computing
- Strengthening science through innovative, high-performance IT technologies, modern IT infrastructures and IT services
- Interdisciplinary integration (technical and personnel) of scientists and (international) research groups
- Novel requirements and research results at the interface of scientific computing and computer-based sciences
- Increased prospects for attracting research funding, with established IT expertise as a contribution to application projects
- Outreach and exploitation
Astrophysics: world's largest simulations of supersonic, compressible turbulence
Slices through the three-dimensional gas density (top panels) and vorticity (bottom panels) for fully developed, highly compressible, supersonic turbulence, generated by solenoidal driving (left-hand column) and compressive driving (right-hand column), at a grid resolution of … cells. Federrath, C., MNRAS (2013); published by Oxford University Press on behalf of the Royal Astronomical Society.

SeisSol — numerical simulation of seismic wave phenomena
Dr. Christian Pelties, Department of Earth and Environmental Sciences (LMU); Prof. Michael Bader, Department of Informatics (TUM)
1.42 PFlop/s on … cores of SuperMUC (44.5% of peak performance)
http://…muenchen.de/informationen_fuer/presse/presseinformationen/2014/pelties_seisol.html
Picture: Alex Breuer (TUM) / Christian Pelties (LMU)
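The peak-performance fraction quoted for SeisSol is consistent with the 3.2 PFlop/s listed for SuperMUC on the earlier system slides (a simple check, not stated on the slide):

$$ \frac{1.42\ \text{PFlop/s}}{0.445} \approx 3.19\ \text{PFlop/s} $$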
Extreme Scale Computing at the Leibniz Supercomputing Centre (LRZ)
Dieter Kranzlmüller