Until now: tl;dr: - submit a job to the scheduler
2 Until now: - access the cluster - copy data to/from the cluster - create parallel software - compile code and use optimized libraries - how to run the software on the full cluster. tl;dr: submit a job to the scheduler
3 What is a job?
4 What is a job scheduler?
5 Job scheduler/resource manager: a piece of software which manages and allocates resources, manages and schedules jobs, and sets up the environment for parallel and distributed computing. (Cartoon: "Two computers are available for 10h. You go, then you go. You wait.")
6 Resources: CPU cores, memory, disk space, network, accelerators, software licenses
10 Slurm: free and open-source; mature; very active community; many success stories; runs 50% of the TOP10 systems, including the 1st. Also an intergalactic soft drink.
11 Other job schedulers: PBS Pro, Torque/Maui, Oracle (ex Sun) Grid Engine, Condor...
12 You will learn how to: create a job, monitor the jobs, control your own job, and get job accounting info with Slurm.
13 1. Make up your mind: job parameters are the resources you need (e.g. 1 core, 2 GB RAM for 1 hour); job steps are the operations you need to perform (e.g. launch 'myprog').
14 2. Write a submission script. It is a shell script (Bash). The #SBATCH lines: Bash sees these as comments, Slurm takes them as commands. The rest: regular Bash comments, regular Bash commands, and job step creation.
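For instance, a minimal submission script for the 1-core / 2 GB / 1-hour job above could look like the following sketch ('myprog' is a placeholder for your own program):

    #!/bin/bash
    # Lines starting with #SBATCH: Bash sees them as comments, Slurm as commands
    #SBATCH --job-name=myjob
    #SBATCH --ntasks=1
    #SBATCH --mem-per-cpu=2048
    #SBATCH --time=01:00:00
    # Regular Bash commands
    echo "Starting at $(date)"
    # Job step creation
    srun ./myprog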
15 Other useful parameters (you want / you ask):
- To set a job name: --job-name=myjobname
- To attach a comment to the job: --comment="Some comment"
- To get e-mails: --mail-type=BEGIN|END|FAIL --mail-user=my@mail.com
- To set the name of the output file: --output=result-%j.txt --error=error-%j.txt
- To delay the start of your job: --begin=16:00, --begin=now+1hour, --begin=<date>T12:34:00
- To specify an ordering of your jobs: --dependency=after(ok|notok|any):jobids, --dependency=singleton
- To control failure handling: --no-kill, --no-requeue, --requeue
16 Constraints and resources (you want / you ask):
- To choose a specific feature (e.g. a processor type or a NIC type): --constraint
- To use a specific resource (e.g. a GPU): --gres
- To reserve a whole node for yourself: --exclusive
- To choose a partition: --partition
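Putting several of these together, a submission header might look like the following sketch (the partition and feature names are hypothetical, site-specific examples):

    #!/bin/bash
    #SBATCH --job-name=myjobname
    #SBATCH --mail-type=END,FAIL
    #SBATCH --mail-user=my@mail.com
    #SBATCH --output=result-%j.txt
    #SBATCH --error=error-%j.txt
    #SBATCH --gres=gpu:1               # one GPU, if the node offers that resource
    #SBATCH --partition=gpu            # hypothetical partition name
    #SBATCH --constraint=sandybridge   # hypothetical feature name
    srun ./myprog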
17 3. Submit the script: I submit with 'sbatch', and Slurm gives me the JobID (one more job parameter).
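Concretely, submission and the JobID that Slurm returns look roughly like this (the number 12345 is of course illustrative):

    $ sbatch submit.sh
    Submitted batch job 12345
    $ squeue -j 12345     # check the job while it is pending or running
    $ scancel 12345       # cancel it using the same JobID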
18 So you can play: download the 'stress' program with wget and untar it on hmem; compile it; you can use it to burn CPU time and memory: ./stress --cpu 1 --vm-bytes 128M --timeout 30s. Then write a job script, submit the job, see it running, cancel it, get it killed.
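A job script for this exercise could be along these lines (a sketch, assuming 'stress' has already been downloaded and compiled in the submission directory):

    #!/bin/bash
    #SBATCH --job-name=stress-test
    #SBATCH --ntasks=1
    #SBATCH --mem-per-cpu=256
    #SBATCH --time=00:05:00
    # Burn one CPU core and 128 MB of memory for 30 seconds
    ./stress --cpu 1 --vm-bytes 128M --timeout 30s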
19 4. Monitor your job: squeue, sprio, sstat, sview
23 A word about backfill. The rule: a job with a lower priority can start before a job with a higher priority if it does not delay that job's start time. (Figure: resources vs. time, jobs ordered by priority.) A low-priority job with a short maximum run time and fewer requirements starts before a higher-priority job.
28 5. Control your job: scancel, scontrol, sview
33 6. Job accounting: sacct, sreport, sshare
39 The rules of fairshare: a share is allocated to you (1/number-of-users). If your actual usage is above that share, your fairshare value is decreased towards 0. If your actual usage is below that share, your fairshare value is increased towards 1. The actual usage taken into account decreases over time.
40 A word about fairshare
41 A word about fairshare. Assume 3 users on a 3-core cluster: Red uses 1 core for a certain period of time; Blue uses 2 cores for half that period; Red uses 2 cores afterwards. (Figure: #nodes vs. time.)
45 Getting cluster info: sinfo, sjstat
47 Interactive work: salloc, e.g. salloc --ntasks=4 --nodes=2
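An interactive session could look roughly like this (a sketch; the allocation message and JobID are illustrative):

    $ salloc --ntasks=4 --nodes=2
    salloc: Granted job allocation 12346
    $ srun hostname        # job steps now run inside the allocation, on 2 nodes
    $ exit                 # leave the shell and release the allocation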
49 Summary. Explore the environment: get node features (sinfo --Node --long), get node usage (sinfo --summarize). Submit a job: define the resources you need, determine what the job should do, write the job script, submit it (sbatch). View the job status (squeue). Get accounting information (sacct).
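Put together, a typical session might look like this sketch (submit.sh being a job script like the one written earlier):

    $ sinfo --summarize      # node usage, summarized per partition
    $ sinfo --Node --long    # per-node features
    $ sbatch submit.sh
    Submitted batch job 12347
    $ squeue -u $USER        # view the status of your jobs
    $ sacct -j 12347         # accounting information once the job has finished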
52 You will learn how to: create a parallel job and request distributed resources with Slurm.
53 Concurrent vs. parallel vs. distributed; master/slave vs. SPMD; synchronous vs. asynchronous; message passing vs. shared memory.
54 Typical resource request (you want / you ask):
- 16 independent processes (no communication): --ntasks=16
- 16 processes for MPI, and you do not care about where cores are distributed: --ntasks=16
- 16 cores spread across distinct nodes: --ntasks=16 --nodes=16
- 16 cores spread across distinct nodes and nobody else around: --ntasks=16 --nodes=16 --exclusive
- 16 processes spread across 8 nodes: --ntasks=16 --ntasks-per-node=2
- 16 processes on the same node: --ntasks=16 --ntasks-per-node=16
- one process that can multithread across 16 cores: --ntasks=1 --cpus-per-task=16
- 4 processes that can each use 4 cores: --ntasks=4 --cpus-per-task=4
- more constrained requests: --distribution=block|cyclic|arbitrary
55 Use case 1: Random sampling. Your program draws random numbers and processes them sequentially. Parallelism is obtained by launching the same program multiple times simultaneously. Every process does the same thing; no inter-process communication; results appended to one common file.
56 Use case 1: Random sampling. You want 16 independent processes (no communication). You ask: --ntasks=16. You use: srun ./myprog
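As a sketch, the corresponding job script could be ('myprog' being your sampling program):

    #!/bin/bash
    #SBATCH --ntasks=16
    #SBATCH --output=results.txt
    # srun launches 16 identical copies of myprog;
    # everything they print ends up in the common output file
    srun ./myprog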
57 Use case 1: Random sampling. You want 16 independent processes (no communication). You ask: --array=1-16 --output=res%a. You merge with: cat res*
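The job-array variant could look like this sketch; each array task writes its own res<N> file:

    #!/bin/bash
    #SBATCH --array=1-16
    #SBATCH --output=res%a
    # One independent array task per index; %a expands to the array task id
    ./myprog

Once all tasks have finished, cat res* merges the pieces into one file.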
58 Use case 2: Multiple datafiles. Your program processes data from one datafile. Parallelism is obtained by launching the same program multiple times on distinct data files. Everybody does the same thing on distinct data stored in different files; no inter-process communication; results appended to one common file.
59 Use case 2: Multiple datafiles. You want 16 independent processes (no communication). You ask: --ntasks=16. You use: srun ./myprog $SLURM_PROCID
60 Use case 2: Multiple datafiles. Useful commands: xargs and find/ls. Single node: ls data* | xargs -n1 -P $SLURM_NPROCS myprog. Multiple nodes: ls data* | xargs -n1 -P $SLURM_NTASKS srun -c1 myprog. Safer: find . -maxdepth 1 -name 'data*' -print0 | xargs -0 -n1 -P ...
61 Use case 2: Multiple datafiles. You want 16 independent processes (no communication). You ask: --array=1-16. You use: $SLURM_ARRAY_TASK_ID
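A sketch of the array approach, assuming the data files are named data1 ... data16:

    #!/bin/bash
    #SBATCH --array=1-16
    # Each array task processes the data file matching its own index
    ./myprog data${SLURM_ARRAY_TASK_ID}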
62 Use case 3: Parameter sweep. Your program tests something for one particular value of a parameter. Parallelism is obtained by launching the same program multiple times with a distinct identifier. Everybody does the same thing except for a given parameter value based on the identifier; no inter-process communication; results appended to one common file.
63 Use case 3: Parameter sweep. You want 16 independent processes (no communication). You ask: --ntasks=16. You use: srun ./myprog $SLURM_PROCID
64 Use case 3: Parameter sweep. You want 16 independent processes (no communication). You ask: --array=1-16 --output=res%a. You use: $SLURM_ARRAY_TASK_ID, and cat res* to merge.
65 Use case 3: Parameter sweep. Useful command: GNU Parallel. Single node: parallel -j $SLURM_NPROCS myprog ::: {1..5} ::: {A..D}. Multiple nodes: parallel -j $SLURM_NTASKS srun -c1 myprog ::: {1..5} ::: {A..D}. Useful: parallel --joblog runtask.log --resume for checkpointing; parallel echo data_{1}_{2}.dat ::: ... ::: 1 2 3 to generate file names.
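Wrapped in a job script, the multi-node GNU Parallel variant could look like this sketch (the parallel module name is an assumption; {1..5} and {A..D} are just the slide's example parameter lists):

    #!/bin/bash
    #SBATCH --ntasks=8
    module load parallel     # assumed module name; adapt to your site
    # Keep at most $SLURM_NTASKS combinations running at once, each as a
    # one-task, one-core job step; the log lets a resubmission --resume
    parallel -j $SLURM_NTASKS --joblog runtask.log --resume \
        srun -n1 -c1 myprog ::: {1..5} ::: {A..D}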
66 Use case 4: Multithread. Your program uses OpenMP or TBB. Parallelism is obtained by launching a multithreaded program. One program spawns itself on the node; inter-process communication by shared memory; results managed in the program, which outputs a summary.
67 Use case 4: Multithread. You want one process that can multithread across 16 cores. You ask: --ntasks=1 --cpus-per-task=16. You use: OMP_NUM_THREADS=16 srun myprog
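A sketch for the multithreaded case:

    #!/bin/bash
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=16
    # One process, 16 threads sharing memory on the same node
    export OMP_NUM_THREADS=16
    srun ./myprog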
68 Use case 5: Message passing. Your program uses MPI. Parallelism is obtained by launching a multi-process program. One program spawns itself on several nodes; inter-process communication by the network; results managed in the program, which outputs a summary.
69 Use case 5: Message passing. You want 16 processes for use with MPI. You ask: --ntasks=16. You use: module load openmpi; mpirun myprog
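A sketch of the MPI case (the openmpi module name follows the slide; adapt it to your site):

    #!/bin/bash
    #SBATCH --ntasks=16
    # 16 MPI ranks, placed wherever Slurm finds free cores
    module load openmpi
    mpirun ./myprog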
70 Use case 6: Master/slave. You have two types of programs: master and slave. Parallelism is obtained by launching several slaves, managed by the master. The master launches several slaves on distinct nodes; inter-process communication by the network or the disk; results managed in the master program, which outputs a summary.
71 Use case 6: Master/slave. You want 16 processes, 16 threads each. You ask: --ntasks=16 --cpus-per-task=16. You use: --multi-prog + a configuration file
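With --multi-prog, a plain-text configuration file maps each task rank to the program it should run; a sketch (./master and ./slave are placeholder names):

    # master.conf
    0      ./master
    1-15   ./slave

and the job step in the submission script becomes:

    #SBATCH --ntasks=16
    #SBATCH --cpus-per-task=16
    srun --multi-prog master.conf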
73 Summary. Choose the number of processes: --ntasks. Choose the number of threads: --cpus-per-task. Launch processes with srun or mpirun. Set multithreading with OMP_NUM_THREADS. You can use $SLURM_PROCID and $SLURM_ARRAY_TASK_ID.
76 Try: download MPI hello world from Wikipedia, compile it, write a job script and submit it. Rewrite the 'Multiple datafiles' examples using xargs. Rewrite the 'Parameter sweep' example using GNU Parallel.