Using the SLURM Job Scheduler

1 Using the SLURM Job Scheduler
[web] portal.biohpc.swmed.edu

2 Overview
Today we're going to cover:
- The basics of SLURM's architecture and daemons
- How to use a basic set of commands
- How to write an sbatch script for job submission
This is only an introduction, but it should give you a good start.

3 What is SLURM?
Simple Linux Utility for Resource Management
- Started as a simple resource manager for Linux clusters
- About 500,000 lines of C code today
The glue that lets a parallel computer execute parallel jobs
- Makes a parallel computer almost as easy to use as a PC
- Parallel programs typically use MPI to manage communications within the program

4 Cluster Architecture
(see https://computing.llnl.gov/linux/slurm/overview.html)

5 BioHPC (current system and partitions)
Storage systems (Lysosome & Endosome): /home2, /project, /work
Compute nodes are grouped into partitions. Log in via SSH to nucleus.biohpc.swmed.edu (login node: Nucleus005) and submit jobs via SLURM.
Partitions: 64GB*, 128GB, 256GB, GPU, 384GB (Nucleus050), super

6 BioHPC (future partitions)
- 64GB* : 8 nodes  -> 0 nodes
- 128GB : 16 nodes -> 32 nodes
- 256GB : 8 nodes  -> 32 nodes
- 384GB : 1 node   -> 2 nodes
- GPU   : 2 nodes  -> 8 nodes
- super : 41 nodes -> 66 nodes
- total : 41 nodes -> 74 nodes

7 Role of SLURM (resource management & job scheduling)
Fair-share resource allocation
Allocates resources within a cluster
- Supports hierarchical communications with a configurable format
- For each node, sockets : cores per socket : threads per core = 2 : 8 : 2, so the total number of parallel tasks available on a single node = 2*8*2 = 32
When there is more work than resources, SLURM manages queues of work
- Request resources based on your needs
- Resource limits can be set per queue, per user, etc.

8 SLURM commands
- Before job submission  : sinfo, squeue, sview, smap
- Submit a job           : sbatch, srun, salloc, sattach
- While a job is running : squeue, sview, scontrol
- After a job completes  : sacct
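As a quick illustration of how these commands fit together, here is a minimal sketch of a typical session (the script name myjob.sh is hypothetical):

  # inspect partitions and the queue before submitting
  sinfo
  squeue -u $USER

  # submit a batch script; sbatch prints the job ID
  sbatch myjob.sh

  # monitor the job while it runs, then review accounting afterwards
  squeue -u $USER
  sacct -j {jobid}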

9 Commands: General Information
- Man pages are available for all commands
- The --help option prints brief descriptions of all options
- The --usage option prints a list of the options
- Almost all options have two formats:
  - a single-letter option (e.g. -p super)
  - a verbose option (e.g. --partition=super)
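For example, these two forms of the same request are interchangeable, and every command documents itself:

  sinfo -p super            # short form
  sinfo --partition=super   # verbose form

  man sbatch                # full documentation
  sbatch --help             # brief description of all options
  sbatch --usage            # option list only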

10 sinfo (DEMO)
Reports the status of nodes or partitions (partition-oriented format by default)
>sinfo --Node     (report status in node-oriented form)
>sinfo -p 256GB   (report status of nodes in partition 256GB)
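A few more sinfo variants that are often handy (a sketch; the exact columns depend on the site configuration):

  sinfo -N -l                        # long, node-oriented listing
  sinfo -t idle -p super             # only idle nodes in partition super
  sinfo -p 256GB -o "%n %c %m %t"    # node name, CPUs, memory (MB), state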

11 Life cycle of a job after submission
Submit a job -> Pending -> Configuration (node booting) -> Running (may be Resizing or Suspended) -> Completing, then one of:
- Completed (zero exit code)
- Cancelled (scancel)
- Failed (non-zero exit code)
- Timeout (time limit reached)
* BioHPC policy: 16 running nodes per user

12 Demo: submit a job and use squeue and scontrol to check it
>squeue
>scontrol show job {jobid}
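Some useful variations ({jobid} and {username} are placeholders):

  squeue -u {username}         # only your jobs
  squeue -j {jobid}            # a single job
  squeue -t PENDING            # all pending jobs
  scontrol show job {jobid}    # full detail: nodes, limits, working directory, ...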

13 Demo: cancel a job with scancel, and use sacct to check previous jobs
>scancel {jobid}
>sacct -u {username}
>sacct -j {jobid}
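sacct can also select a time window and specific accounting fields, for example (the start date is just an example):

  sacct -u {username} -S 2015-01-01 \
        --format=JobID,JobName,Partition,Elapsed,State,ExitCode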

14 List of SLURM commands
- sbatch   : Submit a script for later execution (batch mode)
- salloc   : Create a job allocation and start a shell to use it (interactive mode)
- srun     : Create a job allocation (if needed) and launch a job step (typically an MPI job)
- sattach  : Connect stdin/stdout/stderr to an existing job or job step
- squeue   : Report job and job step status
- smap     : Report system, job or step status with topology; less functionality than sview
- sview    : Report/update system, job, step, partition or reservation status (GTK-based GUI)
- scontrol : Administrator tool to view/update system, job, step, partition or reservation status
- sacct    : Report accounting information by individual job and job step

15 List of valid job states
- PENDING (PD)     : Job is awaiting resource allocation
- RUNNING (R)      : Job currently has an allocation
- SUSPENDED (S)    : Job has an allocation, but execution has been suspended
- COMPLETING (CG)  : Job is in the process of completing
- COMPLETED (CD)   : Job has terminated all processes on all nodes
- CONFIGURING (CF) : Job has been allocated resources and is waiting for them to become ready for use
- CANCELLED (CA)   : Job was explicitly cancelled by the user or a system administrator
- FAILED (F)       : Job terminated with a non-zero exit code or other failure condition
- TIMEOUT (TO)     : Job terminated upon reaching its time limit
- NODE_FAIL (NF)   : Job terminated due to the failure of one or more allocated nodes

16 Demo 01 : submit a single job to a single node

#!/bin/bash
# the #SBATCH lines set up the SLURM environment
#SBATCH --job-name=singlematlab
#SBATCH --partition=super
#SBATCH --nodes=1
#SBATCH --time=0-00:00:30        # time format: D-H:M:S
#SBATCH --output=single.%j.out
#SBATCH --error=single.%j.time

module add matlab/2013a          # load software (export path & library)

# command(s) you want to execute
time(matlab -nodisplay -nodesktop -nosplash -r "forbiohpctestplot(1), exit")

Alternatively you may use: matlab -nodisplay -nodesktop -nosplash < script.m
Running time: ~3.046 s
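To run this demo, save the script under a name of your choice (demo01.sh here is hypothetical), submit it, and inspect the output files named after the job ID:

  sbatch demo01.sh            # prints: Submitted batch job {jobid}
  squeue -u $USER             # watch the job while it is queued/running
  cat single.{jobid}.out      # MATLAB output
  cat single.{jobid}.time     # stderr, including the time(...) report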

17 More sbatch options
#SBATCH --begin=now+1hour
  Defer the allocation of the job until the specified time
#SBATCH --mail-type=ALL
  Notify the user by email when certain event types occur (BEGIN, END, FAIL, REQUEUE, etc.)
#SBATCH --mail-user=<email address>
  Address used to receive notifications of the state changes defined by --mail-type
#SBATCH --mem=65536
  Specify the real memory required per node in megabytes. The default value is DefMemPerNode (UNLIMITED)
#SBATCH --nodelist=nucleus0[10-20]
  Request a specific list of node names. The order of the node names in the list is not important; the node names will be sorted by SLURM
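Putting several of these options together, a job header might look like the following sketch (the email address is a placeholder):

  #!/bin/bash
  #SBATCH --job-name=example
  #SBATCH --partition=256GB
  #SBATCH --nodes=1
  #SBATCH --begin=now+1hour            # hold the job for an hour before it becomes eligible
  #SBATCH --mail-type=ALL              # email on BEGIN, END, FAIL, REQUEUE, ...
  #SBATCH --mail-user=you@example.edu  # placeholder address
  #SBATCH --mem=65536                  # request 64 GB per node (in MB)
  #SBATCH --nodelist=nucleus0[10-20]   # restrict to these nodes (order is irrelevant)

  module add matlab/2013a
  # ... commands to run go here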

18 Demo 02 : submit multiple tasks to a single node (sequential tasks)

#!/bin/bash
#SBATCH --job-name=multimatlab
#SBATCH --partition=super
#SBATCH --nodes=1
#SBATCH --time=0-00:00:30
#SBATCH --output=multiplejob.%j.out
#SBATCH --error=smultiplejob.%j.time

module add matlab/2013a
time(sh script.m)

Running time: ~ s
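The contents of script.m are not shown on this slide; a minimal hypothetical sketch of a sequential driver (16 tasks, matching the later demos) might be:

  #!/bin/bash
  # script.m (hypothetical): run 16 MATLAB tasks one after another
  for i in $(seq 1 16); do
      matlab -nodisplay -nodesktop -nosplash -r "forbiohpctestplot($i), exit"
  done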

19 Demo 03 : submit multiple tasks to a single node (simultaneous tasks), version 1

#!/bin/bash
#SBATCH --job-name=multicorematlab
#SBATCH --partition=super
#SBATCH --nodes=1
#SBATCH --time=0-00:00:30
#SBATCH --output=multiplecore.%j.out
#SBATCH --error=smultiplecore.%j.time

module add matlab/2013a
time(sh script.m)

Running time: ~6.156 s
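For the simultaneous version, a hypothetical script.m might launch the tasks in the background and wait for all of them:

  #!/bin/bash
  # script.m (hypothetical): run 16 MATLAB tasks concurrently on one node
  for i in $(seq 1 16); do
      matlab -nodisplay -nodesktop -nosplash -r "forbiohpctestplot($i), exit" &
  done
  wait   # do not let the batch script exit until every task has finished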

20 Demo 04 : submit multiple jobs to a single node (simultaneous tasks), version 2

#!/bin/bash
#SBATCH --job-name=multi_matlab
#SBATCH --partition=super
#SBATCH --nodes=1
#SBATCH --time=0-00:00:30
#SBATCH --output=multiplecoreadv.%j.out
#SBATCH --error=smultiplecoreadv.%j.time

module add matlab/2013a
time(sh script.m)

Running time: ~4.904 s

21 Demo 05 : submit multiple jobs to a single node with srun

#!/bin/bash
#SBATCH --job-name=srunsinglenodematlab
#SBATCH --partition=super
#SBATCH --nodes=1
#SBATCH --ntasks=16
#SBATCH --time=0-00:00:30
#SBATCH --output=srunsinglenode.%j.out
#SBATCH --error=srunsinglenode.%j.time

module add matlab/2013a
time(sh script.m)

srun creates a resource allocation in which to run the parallel job.
SLURM_LOCALID : environment variable; the node-local task ID for the process within a job (zero-based).
Running time: ~4.596 s
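A hypothetical sketch of a script.m for this demo, where srun starts the 16 tasks and each one derives its input index from SLURM_LOCALID (zero-based, hence the +1), might be:

  #!/bin/bash
  # script.m (hypothetical): let srun start $SLURM_NTASKS copies of the work,
  # each picking its own input from SLURM_LOCALID
  srun bash -c 'matlab -nodisplay -nodesktop -nosplash -r "forbiohpctestplot($((SLURM_LOCALID+1))), exit"'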

22 Why srun?
(Diagram: srun, slurmctld, slurmd and slurmstepd interacting on Node 0.)
srun creates a resource allocation in which to run the parallel job.
SLURM_LOCALID : environment variable; the node-local task ID for the process within a job (zero-based).

23 Demo 06 : submit multiple jobs to multiple nodes with srun

#!/bin/bash
#SBATCH --job-name=srun2nodesmatlab
#SBATCH --partition=super
#SBATCH --nodes=2
#SBATCH --ntasks=16
#SBATCH --time=0-00:00:30
#SBATCH --output=srun2nodes.%j.out
#SBATCH --error=srun2nodes.%j.time

module add matlab/2013a
time(sh script.m)

SLURM_NODEID : the relative node ID of the current node (zero-based)
SLURM_NNODES : total number of nodes in the job's resource allocation
SLURM_NTASKS : total number of processes in the current job
Running time: ~4.741 s
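As a small illustration of these variables, each task started by srun can report where it is running (SLURM_PROCID, the global task rank, is also set by srun):

  srun bash -c 'echo "task $SLURM_PROCID of $SLURM_NTASKS on node $SLURM_NODEID of $SLURM_NNODES ($(hostname))"'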

24 Demo 06 : submit multiple tasks to multiple nodes with srun (results)
- Matrices 1-8 were executed on Nucleus015
- Matrices 9-16 were executed on Nucleus016
- Inside each node, job execution is out of order

25 Demo 07 : submit a job with a dependency
>sbatch test_srun2nodes.sh
>sbatch --depend={jobid} gen_gif.sh
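In scripts it is convenient to capture the first job's ID automatically and to name the dependency type explicitly; a sketch using sbatch's --parsable and --dependency options (afterok is one possible dependency type):

  # submit the compute job and keep only the numeric job ID
  jid=$(sbatch --parsable test_srun2nodes.sh)

  # gen_gif.sh starts only if the first job finishes successfully
  sbatch --dependency=afterok:$jid gen_gif.sh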

26 Parallelize your job: at the thread level
- Shared memory: multiple processors can operate independently but share the same memory (multi-threading)
- Fast, but lower scalability (cannot cross node boundaries)
e.g. POSIX Threads, usually referred to as Pthreads
- Implemented with a <pthread.h> header and a thread library
- For the C programming language
- Thread management: creating and joining threads, etc.

27 Demo 08 : submit a multi-core job to a single node

#!/bin/bash
#SBATCH --job-name=phenix
#SBATCH --partition=super
#SBATCH --nodes=1
#SBATCH --ntasks=30            # does not necessarily need to be specified here
#SBATCH --time=0-20:00:00
#SBATCH --output=phenix.%j.out
#SBATCH --error=phenix.%j.time

module add phenix/1.9
phenix.den_refine model.pdb data.mtz nproc=30

Think about your job from a scientific point of view: can the problem be solved in parallel?
Check with the software or read the manual to see whether it can use multi-threading, and try to use the proper parameters.
If you need help, send an email to the BioHPC ticket system.
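If you prefer to have SLURM reserve the cores explicitly for a single multi-threaded process, --cpus-per-task is the more precise request; a hypothetical variant of the script above:

  #!/bin/bash
  #SBATCH --job-name=phenix
  #SBATCH --partition=super
  #SBATCH --nodes=1
  #SBATCH --ntasks=1
  #SBATCH --cpus-per-task=30     # reserve 30 cores for one multi-threaded process
  #SBATCH --time=0-20:00:00
  #SBATCH --output=phenix.%j.out

  module add phenix/1.9
  phenix.den_refine model.pdb data.mtz nproc=$SLURM_CPUS_PER_TASK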

28 Parallelize your job: at the node level
- Distributed memory: processors have their own local memory
- Memory is scalable with the number of processors
Q: How does a processor access the memory of another node?
e.g. the Message Passing Interface (MPI)
- Scales to over 100k processors
- Distributed
- A library of functions (for C, C++, Fortran), not a new language

29 Demo 09 : submit a multi-node job with message passing

#!/bin/bash
#SBATCH --job-name=mpi_test
#SBATCH --partition=super
#SBATCH --nodes=2
#SBATCH --ntasks=8
#SBATCH --time=0-00:00:30
#SBATCH --output=mpi_test.%j.out

module add mvapich2/gcc/1.9
mpirun ./mpi_only

Rules of thumb:
- total tasks / number of nodes <= 32
- memory needed for each task * tasks on each node <= 64GB/128GB/256GB/384GB
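A sketch of how such a program might be built and submitted; mpi_only.c and demo09.sh are hypothetical file names, and mpicc comes from the mvapich2 module:

  module add mvapich2/gcc/1.9
  mpicc -o mpi_only mpi_only.c   # build the MPI program
  sbatch demo09.sh               # submit the script above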

30 Demo 10 : submit a multi-node job (e.g. MPI + pthreads)

#!/bin/bash
#SBATCH --job-name=mpi_pthread
#SBATCH --partition=super
#SBATCH --nodes=2
#SBATCH --ntasks=8
#SBATCH --time=0-00:00:30
#SBATCH --output=mpi_pthread.%j.out

module add mvapich2/gcc/1.9
NUM_THREADS=$(( SLURM_CPUS_ON_NODE / (SLURM_NTASKS / SLURM_NNODES) ))
mpirun ./mpi_pthread $NUM_THREADS

Rules of thumb:
- total MPI tasks * threads per task / number of nodes <= 32
- memory needed for each MPI task * MPI tasks on each node <= 64GB/128GB/256GB/384GB
A similar strategy works for MPI + OpenMP code: export OMP_NUM_THREADS=$NUM_THREADS
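An alternative way to express the same hybrid layout is to let SLURM compute the thread count via --ntasks-per-node and --cpus-per-task; a hypothetical sketch:

  #!/bin/bash
  #SBATCH --job-name=mpi_pthread
  #SBATCH --partition=super
  #SBATCH --nodes=2
  #SBATCH --ntasks-per-node=4      # 8 MPI tasks in total
  #SBATCH --cpus-per-task=8        # 8 threads per task: 4*8 = 32 cores per node
  #SBATCH --time=0-00:00:30
  #SBATCH --output=mpi_pthread.%j.out

  module add mvapich2/gcc/1.9
  export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK   # for an MPI+OpenMP build
  mpirun ./mpi_pthread $SLURM_CPUS_PER_TASK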

31 Demo 11 : submit a GPU job (e.g. CUDA)

#!/bin/bash
#SBATCH --job-name=cuda_test
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1
#SBATCH --time=0-00:10:00
#SBATCH --output=cuda.%j.out
#SBATCH --error=cuda.%j.err

module add cuda65
./matrixMul -wA=320 -hA=10240 -wB=10240 -hB=320    # A(320, 10240), B(10240, 320)

Jobs will not be allocated any generic resources unless specifically requested at job submission time, using the --gres option supported by sbatch and srun.
Format: --gres=gpu:[n], where n is the number of GPUs. Use the GPU partition.
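To experiment interactively on a GPU node, the same --gres request can be passed to srun (a sketch):

  srun --partition=gpu --gres=gpu:1 --pty bash   # opens an interactive shell on a GPU node
  # ...then, inside that shell:
  nvidia-smi                                     # confirm the allocated GPU is visible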

32 Getting Effective Help
Email the ticket system:
- What is the problem? Provide any error messages and diagnostic output you have.
- When did it happen? What time? Cluster or client? What job ID?
- How did you run it? What did you run, with what parameters, and what do they mean?
- Any unusual circumstances? Have you compiled your own software? Do you customize startup scripts?
- Can we look at your scripts and data? Tell us if you are happy for us to access your scripts/data to help troubleshoot.
