Introduction: Physics at CSC. Tomasz Malkiewicz, Jan Åström


1 Introduction: Physics at CSC. Tomasz Malkiewicz, Jan Åström

2 CSC Autumn School in Computational Physics

Monday November 25
- Course intro (T. Malkiewicz, J. Åström)
- Coffee break
- Massively parallel computations (K. Rummukainen)
- Lunch
- Advanced unix for physicists (J. Lento)
- Debugging and code optimization (S. Ilvonen / J. Enkovaara)
- Coffee break
- On the diversity of particle-based methods (J. Åström)
- Archive and IO + demo on FGI and Cloud (K. Mattila, R. Laurikainen)

Tuesday November 26
- Round robin: how CSC can help your research (T. Malkiewicz)
- Coffee break
- Computational physics with Xeon Phi and GPU (F. Robertsén)
- Lunch
- Advanced unix for physicists (J. Lento)
- Introduction to glaciology and numerical modelling of glacier dynamics, example: Vestfonna ice-cap, Svalbard (M. Schäfer)
- Coffee break
- Continuum models and assumptions (T. Zwinger)
- Scientific visualization, focus on geophysics (J. Hokkanen)

Guided tour of CSC's supercomputers on Tuesday at 12:40.

3 Aims
- Lecture-oriented rather than conference-oriented presentations
- Slides/abstracts available in advance
- Try to make potentially difficult things look relatively easy to learn and understand
- Skip items that have less significance in the everyday work of physicists
- Hands-on sessions, included in most lectures, let you practice the subjects just learned

4 Physics at CSC
- Physics on supercomputers
- Resources available for physicists
- What's new
- Future
- Why and when to use supercomputers
- Courses of interest for physicists
- Physics people at CSC
- Q/A

5 Physics on supercomputers
Physics is a branch of science concerned with the nature, structure and properties of matter, ranging from the smallest scale of atoms and sub-atomic particles to the Universe as a whole. Physics includes experiment and theory and involves both fundamental research driven by curiosity and applied research linked to technology. (EPS report, 2013)
A supercomputer is a computer at the frontline of contemporary processing capacity, particularly speed of calculation. Fastest supercomputer at the time: China's Tianhe-2, with 33.86 petaflop/s (quadrillions of calculations per second) on the LINPACK benchmark.

6 Usage of processor time by discipline, 1H/2013 (total 84.5 million billing units)
[Pie chart; disciplines: Physics, Nanoscience, Chemistry, Astrophysics, Computational fluid dynamics, Biosciences, Grid usage, Materials sciences, Computational drug design, Other]

7 Application software usage (maintained by CSC) according to processor time, 1H/2013 (total 22.3 million core hours)
[Pie chart; applications: GPAW, Gromacs, CP2K, Gaussian, Molpro, NAMD, ADF, VASP, Matlab, Turbomole, Other]

8 New projects by discipline, 1H/2013 (total 195 new projects)
[Pie chart; disciplines: Biosciences, Computer science, Language research, Physics, Grid usage, Chemistry, Structural analysis, Social sciences, Medical sciences, Computational fluid dynamics, Other]

9 Users of computing servers by organization, 2012 (total 1463 users)
[Bar chart; organizations: University of Helsinki, Aalto University, University of Jyväskylä, University of Turku, University of Oulu, University of Eastern Finland, Tampere University of Technology, CSC (PRACE), University of Tampere, CSC (Projects), Other]

10 Foreign user accounts in CSC's server environment, 1H/2013 (total 1121 users from 69 countries)
[Bar chart; countries: Germany, France, U.K., Italy, India, Poland, China, Russia, USA, Spain, The Netherlands, Other (58 countries)]

11 Currently available computing resources
Massive computational challenges: Sisu
- Thousands of cores, > 23 TB memory
- Theoretical peak performance > 240 Tflop/s
Small and medium-sized tasks: HP cluster Taito (+ Vuori by 1/2014)
- Theoretical peak performance 180 Tflop/s (Vuori: 40)
Application server Hippu
- Interactive usage, without a job scheduler
- Postprocessing, e.g. visualization
FGI
CSC cloud services

12 [image slide]

13 Power distribution (FinGrid): last site-level blackout in the early 1980s. CSC started ITI curve monitoring in early Feb 2013.

14 [image slide]

15 Sisu now

16 Sisu rear view

17 Taito (HP), hosted in an SGI Ice Cube R80

18 SGI Ice Cube R80

19 Taito

20 Cray Dragonfly Topology
- All-to-all network between groups
- 2-dimensional all-to-all network within a group
- Optical uplinks to the inter-group network
Source: Robert Alverson, Cray, Hot Interconnects 2012 keynote

21 Performance of numerical libraries
[Bar chart: DGEMM 1000x1000 single-core performance in GFlop/s for ATLAS 3.8, ATLAS 3.10, ACML 5.2, Ifort 12.1 matmul, the RedHat 6.2 RPM build, MKL and LibSci, on a 2.7 GHz Sandy Bridge (turbo peak 3.5 GHz x 8 Flop/Hz when only 1 core is active) versus a 2.3 GHz x 4 Flop/Hz Opteron Barcelona (Louhi)]
MKL is the best choice on Sandy Bridge, for now. (On Cray, LibSci is a good alternative.)
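To make the library comparison concrete, here is a minimal sketch of how such a DGEMM figure can be measured. This is not the benchmark code behind the chart; it assumes a CBLAS implementation (e.g. OpenBLAS or MKL) is installed and linked, and the matrix size matches the 1000x1000 case above.

```c
/* Minimal DGEMM throughput sketch: times C = alpha*A*B + beta*C for
 * 1000x1000 double-precision matrices and reports GFlop/s.
 * Build e.g. with:  cc dgemm_bench.c -O2 -lopenblas -o dgemm_bench
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <cblas.h>

int main(void) {
    const int n = 1000;                     /* matrix dimension, as on the slide */
    double *A = malloc(sizeof(double) * n * n);
    double *B = malloc(sizeof(double) * n * n);
    double *C = malloc(sizeof(double) * n * n);
    if (!A || !B || !C) return 1;

    for (int i = 0; i < n * n; i++) {       /* arbitrary input data */
        A[i] = 1.0 / (i + 1);
        B[i] = 1.0 - A[i];
        C[i] = 0.0;
    }

    /* Warm-up call so library initialization is not timed. */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0, A, n, B, n, 0.0, C, n);

    struct timespec t0, t1;
    const int reps = 10;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < reps; r++)
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, A, n, B, n, 0.0, C, n);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    double gflops = 2.0 * n * n * n * reps / secs / 1e9;  /* 2n^3 flops per GEMM */
    printf("DGEMM %dx%d: %.1f GFlop/s\n", n, n, gflops);

    free(A); free(B); free(C);
    return 0;
}
```

Linking the same source against different BLAS implementations reproduces the kind of comparison shown in the chart; on a Cray, the compiler wrappers link LibSci by default.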

22 Sisu & Taito vs. Vuori vs. FGI vs. Merope (local cluster)

Sisu & Taito (Phase 1): available. CPU: Intel Sandy Bridge, 2 x 8 cores, 2.6 GHz, Xeon E5. Interconnect: Aries / FDR IB. RAM per core: 2 / 4 GB (16 fat nodes with 256 GB/node). Tflops: 244 / 180; GPU nodes in Phase 2. Disc space: 2.4 PB.
Vuori: available (by 1/2014). CPU: 2.6 GHz AMD Opteron and Intel Xeon. Interconnect: QDR IB. RAM per core: 1 / 2 / 8 GB. Disc space: 145 TB.
FGI: available. CPU: Intel Xeon, 2 x 6 cores, 2.7 GHz, X5650. Interconnect: QDR IB. RAM per core: 2 / 4 / 8 GB. Disc space: 1+ PB.
Merope: available. RAM per core: 4 / 8 GB. Disc space: 100 TB.

23 What's new

24 Future

                 Phase 1 (Cray / HP)                  Phase 2 (Cray / HP)
Deployment:      Done / Done                          Probably 2014
CPU:             Intel Sandy Bridge 2.6 GHz           Next-generation processors
Interconnect:    Aries / FDR InfiniBand (56 Gbps)     Aries / EDR InfiniBand (100 Gbps)
Cores:                                                ~40000 / ~17000
Tflops:          244 / 180 (5x Vuori)                 (15x Vuori)

Phase 1 total: 424 Tflops

25 CSC Computing Capacity
[Timeline chart: standardized processor capacity (max. capacity at 80% vs. capacity used) and Top500 ratings of CSC systems, from the Cray X-MP/416 and Convex C3840 through SGI, IBM SP, Cray T3E, Compaq Alpha (Clux and Hiisi, decommissioned 2/2005), HP Proliant and Cray XT4/XT5 (Louhi; Murska decommissioned 6/2012) generations to the current Cray XC30 (Sisu) and HP Proliant SL230s (Taito). The Top500 lists were started in 1993.]

26 IT summary
Cray XC30 supercomputer (Sisu)
- Fastest computer in Finland
- Phase 1: 385 kW, 244 Tflop/s, 16 x 2 GB cores per compute node, 4 x 256 GB login nodes
- Phase 2: ~1700 Tflop/s
- Very high density, large racks
PRACE prototype (coming late 2013 and 2014)
- Intel Xeon Phi coprocessors
- NVIDIA next-generation GPUs

27 IT summary cont.
HP (Taito)
- 1152 Intel CPUs
- 16 x 4 GB cores per node
- 16 fat nodes with 16 x 16 GB cores per node
- 6 x 64 GB login nodes
- 180 TFlop/s
- 30 kW, 47 U racks
HPC storage
- Petabytes of fast parallel storage
- Supports the Cray and HP systems

28 Why and when to use HPC?
[Scaling plot: ns/day vs. number of cores for a lipid MD simulation, 120k atoms, PME, Gromacs, on Louhi, Vuori, Taito and Sisu]
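One way to read such a scaling curve (a reminder added here, not part of the slide) is through Amdahl's law: if a fraction s of the run time is serial, the speedup on N cores is bounded, and communication overhead decides how close a code gets to that bound.

```latex
% Amdahl's law: speedup on N cores when a fraction s of the work is serial
S(N) = \frac{1}{s + (1 - s)/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{s}
```

Even s = 1% caps the speedup at 100 regardless of core count, which is why the comparison across Louhi, Vuori, Taito and Sisu is as much about interconnects and collective communication as about per-core speed.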

29 Courses at CSC
- CSC HPC Summer School
- Sisu (Cray) workshops
- Taito (HP) workshops (December 2013)
- Intel Xeon Phi programming

30 Physics people at CSC
- Particle-based methods: Jan Åström
- Geophysics/glaciology: Thomas Zwinger
- Nanoscience/semiconductors: Jura Tarus
- Nuclear/particle physics: Tomasz Malkiewicz
- Partial differential equations/Elmer: Peter Råback
- A few with a background in DFT: Juha Lento
- Quantum chemistry: Nino Runeberg
- A few with a numerical mathematics background
- Several with advanced code optimisation skills
- Everything related to HPC in general

31 Q/A: Need disk space?
- 3.8 PB on DDN
- $HOME, $USERAPPL: 20 GB
- $WRKDIR (not backed up), soft quota: 5 TB
- HPC archive: 2 TB / user, common between Cray and HP
- /tmp (around 1.8 TB): to be used for compiling codes
- More disk space through IDA

32 Disks at Kajaani
[Diagram: taito.csc.fi and sisu.csc.fi login and compute nodes each have local $TMPDIR and share $WRKDIR, $HOME and $USERAPPL ($HOME/xyz); the new tape $ARCHIVE in Espoo is reached through the iRODS interface and its disk cache, from your workstation via an iRODS client, SUI, or the i-commands icp, iput, ils, irm]

33 Datasets served by TTA
- Projects funded by the Academy of Finland (Academy projects, centres of excellence, research programmes and research infrastructures): 1 PB capacity
- Universities and polytechnics: 1 PB capacity
- ESFRI projects (e.g. BBMRI, CLARIN), FSD, pilots and extensions: 1 PB capacity
- Other important research projects via a special application process

34 Q/A: Is there a single place to look for info regarding supercomputers?
- User manuals
- Support: [email protected]

35 Q/A: Need large capacity? -> Grand Challenges
- Normal GC call (every half a year / year): new CSC resources available for a year; no lower limit on the number of cores, up to 50%
- Special GC call (mainly for Cray, when needed): possibility for short (a day or less) runs with the whole Cray
- Remember also PRACE/DECI

36 Q/A: Is Cloud something for me? Example: Taito
Taito cluster: two types of nodes, HPC and cloud
- HPC nodes run directly on the host OS (RHEL)
- Cloud nodes run virtual machines, with a guest OS such as Ubuntu or Windows

37 Q/A: How fast is the I/O?
- Infiniband interconnect: 56 Gbit/s, tested to give 20 GB/s (peak, on DDN)
- i-commands: 100 MB/s = 1 Gbit/s (10-16 threads; transfers > 32 MB are spread out, the kernel schedules them)
- SUI: 11 MB/s, so 1 GB takes about a minute
- Fastest laptop: 120 MB/s; disc write speed 40 MB/s
- 10 Gbit/s ethernet = 1.2 GB/s
- Metadata operations on Lustre take long, so it is not good to have many small files
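As an illustration (not from the slide) of how such MB/s figures can be measured, here is a minimal sketch that writes a large file sequentially and reports the write bandwidth; the 1 GiB size, chunk size and output file name are arbitrary choices.

```c
/* Minimal sequential-write bandwidth sketch: writes 1 GiB in 8 MiB
 * chunks and reports MB/s.  Build:  cc iobench.c -O2 -o iobench
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void) {
    const size_t chunk = 8 << 20;             /* 8 MiB per write */
    const size_t total = (size_t)1 << 30;     /* 1 GiB in total */
    char *buf = malloc(chunk);
    if (!buf) return 1;
    memset(buf, 'x', chunk);                  /* arbitrary payload */

    FILE *f = fopen("iobench.dat", "wb");     /* e.g. run under $WRKDIR */
    if (!f) return 1;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t done = 0; done < total; done += chunk)
        fwrite(buf, 1, chunk, f);
    fclose(f);                                /* flushes stdio buffers */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    printf("wrote %.0f MB in %.2f s -> %.1f MB/s\n",
           total / 1e6, secs, total / 1e6 / secs);
    free(buf);
    return 0;
}
```

Without an fsync() the figure partly reflects the page cache rather than the storage, and on Lustre one large file like this behaves far better than the same gigabyte split across thousands of small files, which is exactly the metadata effect noted above.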

38 Q/A: Fastest way to connect?
NoMachine NX server for remote access

39 Q/A: How to get access to CSC supercomputers?
sui.csc.fi (HAKA authentication): sign up

40 Quick summary
- Per-core performance ~2x compared to Vuori
- Better interconnects enhance scaling
- Larger memory
- Smarter collective communications
- The most powerful computer(s) in Finland
- Big investment
[Scaling plot: Gromacs performance, ns/day vs. cores, for Taito, Sisu, FGI, Vuori and Louhi]

41 Round robin

42 Round robin
- What are your research interests?
- What are your needs in terms of computing?
- How can CSC help?
- Any comments for CSC?
