PRACE hardware, software and services. David Henty, EPCC, d.henty@epcc.ed.ac.uk




Why?
- Weather, Climatology, Earth Science: degree of warming and scenarios for our future climate; understanding and predicting ocean properties and variations; weather and flood events
- Astrophysics, Elementary Particle Physics, Plasma Physics: systems and structures spanning a large range of length and time scales; quantum field theories such as QCD; ITER
- Materials Science, Chemistry, Nanoscience: understanding complex materials and complex chemistry; determination of electronic and transport properties
- Life Science: systems biology, chromatin dynamics, large-scale protein dynamics, protein association and aggregation, supramolecular systems, medicine
- Engineering: complex helicopter simulation, biomedical flows, gas turbines and internal combustion engines, forest fires, green aircraft, virtual power plant

Supercomputing Drives Science through Simulation
- Environment: weather/climatology, pollution/ozone hole
- Ageing Society: medicine, biology
- Materials/Information Technology: spintronics, nanoscience
- Energy: plasma physics, fuel cells

Sum of Performance per Country (TOP500)

Rationale
- Europe must maintain its high standards in computational science and engineering
- Europe has to guarantee independent access to HPC systems of the highest performance class for all computational scientists in its member states
- Scientific excellence requires peer review on a European scale to foster the best ideas and groups
- User requirements for a variety of architectures demand coordinated procurement
- The EU and national governments have to establish a robust and persistent funding scheme

HPC on the ESFRI Roadmap
- 2006: first comprehensive definition of Research Infrastructures (RIs) at the European level
- RIs are major pillars of the European Research Area
- A European HPC service means: strategic competitiveness; attractiveness for researchers; access based on excellence; support for industrial development

The ESFRI Vision for a European HPC service
- European HPC facilities at the top of an HPC provisioning pyramid (capability rising towards the peak, number of systems towards the base):
  - Tier-0: 3-6 European Centres for Petaflop; ? European Centres for Exaflop (PRACE)
  - Tier-1: National Centres (DEISA/PRACE)
  - Tier-2: Regional/University Centres
- Creation of a European HPC ecosystem: scientific and industrial user communities; HPC service providers on all tiers; grid infrastructures; the European HPC hardware and software industry

PRACE in Europe

PRACE Timeline
- 2004-2007: HPCEUR and HET
- 2007: PRACE MoU
- 2008-2010: PRACE Preparatory Phase (EU grant INFSO-RI-211528, 10 Mio.)
- 23.4.2010: PRACE AISBL founded, a legal entity with (current) seat location in Brussels
- 2010-2013: PRACE Operation; PRACE Implementation Phase projects (1IP, 2IP)

Purpose of Workshop
- Introduce you to the DECI-7 process
- Get you logged on to Tier-1 machines
- Make sure you can compile and run simple codes
- Inform you of the applications support available
- Get you started on your own codes

Timetable

Day 1
13:30-13:45  Welcome and Introduction to SARA
13:45-14:30  PRACE hardware, software and services
14:30-15:30  Use of Certificates, Using gsi-ssh and gridftp
15:30-16:00  Coffee Break
16:00-17:30  Hands-on session (practical examples)

Day 2
09:30-10:15  PRACE support for the DECI projects
10:15-10:30  Remote Visualization
10:30-11:00  Hands-on sessions (users' own application codes)
11:00-11:30  Coffee Break
11:30-12:30  Hands-on sessions (users' own application codes)
12:30-13:30  Lunch
13:30-15:30  One-to-one sessions (optional)

Access to PRACE resources
- Regular calls for proposals: see http://www.prace-ri.eu/hpc-access
- Successful projects are allocated a maximum number of CPU hours and given access for a limited period of time
- Linked calls: can apply for Tier-0 or Tier-1
- Tier-1 access is via DECI (a continuation of the DEISA scheme)
  - active projects (you!) are DECI-7
  - call already open for DECI-8, starting May 2012

Tier-0 Systems
- IBM Blue Gene/P JUGENE (GCS@Jülich, Germany)
- Bull Bullx cluster CURIE (GENCI@CEA, France)
- Cray XE6 HERMIT (GCS@HLRS, Germany)
- SuperMUC (GCS@LRZ, Germany)
- MareNostrum (BSC, Spain)
- FERMI (CINECA, Italy)

Tier-1 Systems: specialist
- Cray XT4/5/6 and Cray XE6: EPCC (UK), KTH (Sweden), CSC (Finland)
- IBM Blue Gene/P: IDRIS (France), RZG (Germany), NCSA (Bulgaria)
- IBM Power 6: RZG (Germany), SARA (The Netherlands), CINECA (Italy)

Tier-1 Systems: clusters
- FZJ (Germany): Bull Nehalem cluster
- LRZ (Germany): Xeon cluster
- HLRS (Germany): NEC Nehalem cluster plus GPGPU cluster
- CINES (France): SGI ICE 8200
- BSC (Spain): IBM PowerPC
- CINECA (Italy): Westmere plus GPGPU cluster
- PSNC (Poland): Bullx plus GPGPU cluster and HP cluster
- ICHEC (Ireland): SGI ICE 8200

DECI Terminology
- Every project has a single HOME site and one or more EXECUTION sites
- Home site: your main point of contact; you will have a named person responsible for login accounts etc.
- Execution sites: where you run your jobs

Accounts
- HOME site: you must apply for an account here
  - you supply a certificate from your national Certificate Authority
  - the account is automatically propagated to the execution site(s)
  - additional information may be needed depending on local arrangements, e.g. signing up to codes of conduct
- EXECUTION sites: the same user name as at your home site is recommended
  - access is via gsissh; some sites may also support ssh

Security Infrastructure
- Standard public/private key setup
  - private key: known only to its owner
  - public key: known to everyone
- One key encrypts, the other decrypts: this gives both authentication and privacy
(figure by Borja Sotomayor, http://gdp.globus.org/gt4-tutorial/multiplehtml/ch09s03.html, from J. Schopf, Globus Alliance)
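
The last point can be made concrete with a short sketch. The example below is an illustration only, written with the third-party Python "cryptography" package rather than PRACE's actual Globus/GSI middleware (which works in terms of X.509 certificates): encrypting with the public key gives privacy, while signing with the private key gives authentication.

# Minimal sketch of the two uses of a public/private key pair.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Privacy: anyone may encrypt with the public key, but only the
# private-key holder can decrypt.
ciphertext = public_key.encrypt(b"secret job parameters", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"secret job parameters"

# Authentication: only the private-key holder can sign, but anyone
# can verify the signature with the public key.
signature = private_key.sign(b"login request", pss, hashes.SHA256())
public_key.verify(signature, b"login request", pss, hashes.SHA256())
# verify() raises InvalidSignature if the message or signature is altered.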

Certificates
- Similar to a passport or driver's licence
- All PRACE users have an X.509 certificate, certified by their national certificate authority
- We have special temporary certificates for training sessions
- Enables secure authentication
(figure by Rachana Ananthakrishnan, from J. Schopf, Globus Alliance)
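
If you want to check your certificate before the session, its fields can be inspected locally. A minimal sketch, again with the Python "cryptography" package; the file name usercert.pem is hypothetical, and your CA will have issued the certificate in whatever location your site documents.

# Sketch: print the identifying fields of an X.509 certificate.
# "usercert.pem" is an illustrative file name only.
from cryptography import x509

with open("usercert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject: ", cert.subject.rfc4514_string())  # who the certificate identifies
print("Issuer:  ", cert.issuer.rfc4514_string())   # the certificate authority
print("Expires: ", cert.not_valid_after)           # check before a long project!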

Managing certificates
- The certificate must be installed on your local machine, protected by a private password/passphrase
- The list of user certificates is held by PRACE in a central LDAP (Lightweight Directory Access Protocol) database
- PRACE sites synchronise with the LDAP at regular intervals
- Ask your home site if you have problems!
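
As an aside, this is what an LDAP lookup looks like in code. The sketch below uses the third-party Python "ldap3" package; the host name, base DN and attributes are invented for illustration, since the real PRACE directory schema and access controls are internal to the sites and not publicly queryable.

# Sketch: look up a user entry in an LDAP directory with the third-party
# "ldap3" package (pip install ldap3). All names here are hypothetical.
from ldap3 import Server, Connection, ANONYMOUS

server = Server("ldap.example-prace-site.eu")
conn = Connection(server, authentication=ANONYMOUS, auto_bind=True)

conn.search(search_base="ou=users,dc=prace,dc=example",
            search_filter="(uid=pr1utr11)",        # a training account name
            attributes=["cn", "mail"])
for entry in conn.entries:
    print(entry)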

CPU normalisation
CPU is accounted in standard core-hours; e.g. the conversion factors for DEISA were:

  System                   Factor
  AMD Opt@2.2              1.4
  BGP@0.85                 0.33
  I2 DC@1.6                1.1
  Intel Harpertown@2.5     1.4
  Intel Harpertown@3       1.6
  Intel Nehalem@2.8        2.8
  Intel Nehalem@2.93       3
  Intel Westmere EP@2.67   2.7
  Intel Westmere EX@2.4    2.6
  Intel Westmere EX@2.67   2.7
  NEC SX8                  6
  NEC SX9                  36
  P4+@1.5                  0.88
  P6@4.7                   3
  PPC@2.3                  0.8
  X2                       4
  XE6 12C@2.1              1.25
  XEON X5560@2.80          2.8
  XEON X5570@2.93          3
  XT5 CSCS                 1.2
  XT5 DC@2.3               1.4
  XT5 DC@2.7               1.4
  XT6                      1.25
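
The accounting itself is simple arithmetic: the raw core-hours consumed on a machine are multiplied by that machine's factor to give standard core-hours. A small sketch (the factor values are taken from the table above; the function name is ours):

# Sketch: convert raw usage to DEISA standard core-hours.
# The factors below are a subset of the table above.
FACTORS = {
    "BGP@0.85": 0.33,
    "Intel Nehalem@2.93": 3.0,
    "NEC SX9": 36.0,
    "XE6 12C@2.1": 1.25,
}

def standard_core_hours(system: str, cores: int, hours: float) -> float:
    """Raw core-hours scaled by the per-system normalisation factor."""
    return cores * hours * FACTORS[system]

# A 24-hour run on 256 cores of a Cray XE6 is charged as
# 256 * 24 * 1.25 = 7680 standard core-hours:
print(standard_core_hours("XE6 12C@2.1", 256, 24.0))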

Current status
- Tier-1 systems are new to PRACE: DECI-7 is the first PRACE DECI, so integration may not be complete for all sites
- Training accounts: we provide temporary accounts for today, pr1utrXX (XX = 11, 12, ..., 35)
  - notional home site is EPCC (HECToR)
  - temporary certificate from the ECMWF CA
  - execution sites are at SARA, CSC and CINECA, chosen to span a range of architectures

DECI-7 Statistics
- 54 applications, requesting 200 million standard core-hours
- 35 successful projects, allocated 90 million standard core-hours
- Start date: 1st November 2011
- End date: 31st October 2012
- You MUST use your CPU allocation in this period
- Final reports must be submitted within 3 months of project completion

Useful resources
- Documentation: currently provided via the DEISA site, http://www.deisa.eu/usersupport/user-documentation, migrating to http://www.prace-ri.eu
- Reporting problems: email support@prace-ri.eu; the Trouble Ticket System will be opened up to users in the future
- Monitoring CPU usage: done via the DART tool; your home site can advise on installing this

UNICORE (www.unicore.eu)
- Run a graphical client on your local machine
- Uniform interface to different HPC systems
- Support for data transfer, workflows etc.

Meal tonight @ 8pm: Brasserie Harkema