Introductory Tutorial to Parallel and Distributed Computing Tools of cluster.tigem.it


1 Introductory Tutorial to Parallel and Distributed Computing Tools of cluster.tigem.it

2 A Computer Cluster is a group of networked computers working closely together. The computers are called nodes.

3 Cluster node Each cluster node contains one or more CPUs, memory, disks, network interfaces, a graphics adapter, etc., like your desktop computer. It can execute programs without tying up your workstation.

4 Terminology (cluster) The front-end node is where users log in and interact with the system; the computing nodes execute users' programs.

5 Users' home directories The users' home directories are hosted on the front-end, which shares them with the computing nodes.

6 cluster.tigem.it node specifications 28 x Dell PowerEdge 1750 server nodes CPU: 2 x Intel Xeon 3.06 GHz Memory: 2 GB (node 18: 4 GB, front-end: 8 GB) Disk: 147 GB OS: Linux Distribution: CentOS (~ Red Hat Enterprise)

7 cluster.tigem.it specifications One cluster CPU: 56 x Intel Xeon 3.06 GHz Memory: 64 GB Disk: 8700 GB OS: Linux Distribution: Rocks Cluster

8 Access to the cluster Login: ssh/PuTTY, text based (faster); VNC, graphical (fast); X11, graphical (slow). File transfer: scp, text based; WinSCP/Cyberduck, graphical.

9 Problem How can we manage multi-user access to the cluster nodes? A users' agreement? Assigning a subset of nodes to each user? Not feasible, not convenient. We can use a resource management system.

10 Terminology (resource management systems) Batch processing is the capability of running jobs outside of the interactive login session. A job (or batch job) is the basic execution object managed by the batch subsystem: a collection of related processes which is managed as a whole. A job can often be thought of as a shell script.

11 Terminology (resource management systems) A queue is an ordered collection of jobs within the batch queuing system. Each queue has a set of associated attributes which determine what actions are performed upon each job within the queue. Typical attributes include queue name, queue priority, resource limits, destination(s) and job count limits. Selection/scheduling of jobs depends on a central job manager.

12 Using a resource management system To execute our programs on the cluster we need to prepare a job: write a non-interactive shell script and enqueue it in the system. Non-interactive: input, output and error streams are files.
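What "non-interactive" means can be sketched locally with plain redirections; a minimal example, assuming illustrative filenames job.in, job.out and job.err (not names the batch system actually uses):

```shell
# The batch system runs your script with its streams attached to
# files, roughly like this. job.in/job.out/job.err are illustrative.
printf 'hello\n' > job.in
sh -c 'read line; echo "got: $line"; echo "oops" >&2' \
    < job.in > job.out 2> job.err
cat job.out
cat job.err
```

Anything the script would have read from the keyboard must already be in a file, and anything it prints ends up in files you inspect afterwards.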

13 First simple script Sleep 10 seconds and print the hostname
~]$ cat hostname.sh
#!/bin/sh
sleep 10
/bin/hostname

14 First simple submission Submit the job to the serial queue
~]$ qsub -q serial hostname.sh
1447.cluster.tigem.it
The output of the qsub command is the JobID (Job IDentifier), a unique value that identifies your job inside the system
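Scripts often capture the JobID that qsub prints, e.g. to cancel the job later with qdel. A minimal sketch, with the qsub output string hard-coded since no cluster is available here:

```shell
# The numeric part of a JobID can be split off with shell parameter
# expansion (everything before the first dot).
qsub_output="1447.cluster.tigem.it"
job_number="${qsub_output%%.*}"
echo "$job_number"
```

On the real cluster you would write `qsub_output=$(qsub -q serial hostname.sh)` instead of hard-coding the string.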

15 First simple status query Look at the job status with qstat
~]$ qstat
Job id Name User Time Use S Queue
1447.cluster.tigem.it hostname.sh oliva 0 R serial
qstat displays jobs' status sorted by JobID. R means Running, Q means Queued (man qstat for other values)

16 Job Completion When our simple job is completed we can find two files in our directory
~]$ ls hostname.sh.*
hostname.sh.e1447 hostname.sh.o1447
The ${JobName}.e${JobID} file contains the job's standard error stream, while ${JobName}.o${JobID} contains the job's standard output. Look inside them with cat!
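The naming convention can be reproduced with plain shell variables, using the JobName and JobID from the example above:

```shell
# PBS writes the job's streams to ${JobName}.o${JobID} (stdout)
# and ${JobName}.e${JobID} (stderr).
job_name="hostname.sh"
job_id=1447
echo "${job_name}.o${job_id}"
echo "${job_name}.e${job_id}"
```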

17 Status of the queues The qstat command can also be used to check the queue status ~]$ qstat -q

18 Cancelling Jobs To cancel a job that is running or queued, use the qdel command; it accepts the JobID as argument
~]$ qdel 1448.cluster.tigem.it

19 Interactive Jobs qsub allows you to execute interactive jobs by using the -I option. If your program is controlled by a graphical user interface you can also export the display with the -X option (like ssh). To run matlab on a dedicated node:
~]$ qsub -X -I -q serial
qsub: waiting for job 1449.cluster.tigem.it to start
qsub: job 1449.cluster.tigem.it ready
~]$ /share/apps/...

20 Interactive Jobs The use of Graphical User Interfaces on cluster nodes is HIGHLY DISCOURAGED!!!! You'd better use matlab from the terminal
~]$ /share/apps/matlab/bin/matlab -nodisplay
Matlab>

21 Exclusive use of a cluster node Every node of our cluster is equipped with 2 CPUs, therefore the job manager allocates 2 jobs on each node. Torque allows you to use a node exclusively, ensuring that only your job is executed on that node, by specifying the option -W x="naccesspolicy:singlejob"
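A dry-run sketch of the full submission line (the command is only echoed, not executed, since no cluster is available here; the queue name follows the earlier examples):

```shell
# Build and print the qsub command for exclusive node access.
# Note: on the real command line the -W value is usually quoted,
# e.g. -W x="naccesspolicy:singlejob".
script="hostname.sh"
cmd="qsub -q serial -W x=naccesspolicy:singlejob $script"
echo "$cmd"
```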

22 Batch Matlab Jobs To run your matlab program in a non-interactive batch job you need to invoke matlab with the -nodesktop option and redirect its standard input from the .m file
~]$ cat matlab.sh
#!/bin/sh
/usr/local/bin/matlab -nodesktop < /data/user/run1.m

23 Batch R Jobs To run your R program in a non-interactive batch job you need to invoke R with the CMD BATCH arguments, the name of the file containing the R code to be executed, options, and the name of the output file
~]$ cat R.sh
#!/bin/sh
/usr/bin/R CMD BATCH script.R script.Rout
Syntax: R CMD BATCH [options] infile [outfile]

24 Job Array To submit large numbers of jobs based on the same job script, rather than repeatedly calling qsub, job arrays allow the creation of multiple jobs with one qsub command. A new job naming convention allows users to reference the entire set of jobs as a unit, or to reference one particular job from the set.

25 Job Array To submit a job array use the -t option with a range of integers, which can be combined in a comma-separated list. Examples: -t 1-2 or -t 1,10,...
~]$ qsub -t 1-2 -q serial hostname.sh
1450.cluster.tigem.it
Job id Name User Time Use S Queue
1450-1.cluster.tigem.it hostname.sh-1 oliva 0 Q default
1450-2.cluster.tigem.it hostname.sh-2 oliva 0 Q default
The number appended to the job name (-1, -2) is the ArrayID

26 PBS_ARRAYID Each job in a job array gets a unique ArrayID. Use the ArrayID value in your script through the PBS_ARRAYID environment variable. Example: suppose you have 1000 jpg images named image-1.jpg, image-2.jpg, ... and want to convert them to the png format:
~]$ cat image-processing.sh
#!/bin/bash
convert image-$PBS_ARRAYID.jpg image-$PBS_ARRAYID.png
~]$ qsub -t 1-1000 image-processing.sh
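What each array task ends up running can be previewed locally by looping over a few ArrayID values by hand. On the cluster Torque sets PBS_ARRAYID itself; here we only echo the convert commands instead of executing them:

```shell
# Simulate three tasks of the job array: print the command each
# task would run, substituting PBS_ARRAYID ourselves.
for PBS_ARRAYID in 1 2 3; do
  echo "convert image-${PBS_ARRAYID}.jpg image-${PBS_ARRAYID}.png"
done
```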

27 Matlab Parallel Computing Toolbox

28 Matlab PCT Architecture Parallel Computing Toolbox (PCT) allows you to offload work from one MATLAB session (the client) to other MATLAB sessions, called workers. Matlab Client Matlab Workers

29 Matlab PCT You can use multiple workers to take advantage of parallel processing You can use a worker to keep your MATLAB client session free for interactive work MATLAB Distributed Computing Server software allows you to run up to 54 workers on cluster.tigem.it

30 Matlab PCT use cases Parallel for-loops (parfor) Large Data Sets SPMD Pmode

31 Repetitive iterations Many applications involve multiple segments of repetitive code (for-loops). Parameter sweep applications: Many iterations — a sweep might take a long time because it comprises many iterations; each iteration by itself might not take long to execute, but completing thousands or millions of iterations in serial could take a long time. Long iterations — a sweep might not have a lot of iterations, but each iteration could take a long time to run.

32 parfor A parfor-loop does the same job as the standard MATLAB for-loop: it executes a series of statements (the loop body) over a range of values. Part of the parfor body is executed on the MATLAB client (where the parfor is issued) and part is executed in parallel on MATLAB workers. Data is sent from the client to the workers, and the results are sent back to the client and pieced together.

33 Parfor execution Steps for and parfor code comparison:
for i=1:1024
A(i) = sin(i*2*pi/1024);
end
plot(A)
versus:
matlabpool open local 3
parfor i=1:1024
A(i) = sin(i*2*pi/1024);
end
plot(A)
matlabpool close
To interactively run code that contains a parallel loop, first open a MATLAB pool to reserve a collection of MATLAB workers.

34 Parfor limitations You cannot use a parfor-loop when an iteration in your loop depends on the results of other iterations Each iteration must be independent of all others Since there is a communications cost involved in a parfor-loop, there might be no advantage to using one when you have only a small number of simple calculations

35 Single Program Multiple Data The single program multiple data (spmd) language construct allows the subsequent use of serial and parallel programming The spmd statement lets you define a block of code to run simultaneously on multiple workers (called Labs)

36 SPMD example This code creates an identity matrix of the same random size on all the Labs, selects the same random row on each Lab (j is computed once on the client), then selects a different random row on each Lab (k is computed inside spmd, so each Lab draws its own):
matlabpool 4
i=randi(10,1)
spmd
R = eye(i);
end
j=randi(i,1)
spmd
R(j,:);
k=randi(i,1)
R(k,:);
end

37 labindex variable The Labs used for an spmd statement each have a unique value for labindex. This lets you specify code to be run on only certain labs, or to customize execution, usually for the purpose of accessing unique data.
spmd
labdata = load(['datafile_' num2str(labindex) '.ascii'])
result = MyFunction(labdata)
end

38 Distributed Arrays You can create a distributed array in the MATLAB client, and its data is stored on the Labs of the open MATLAB pool A distributed array is distributed in one dimension, along the last nonsingleton dimension, and as evenly as possible along that dimension among the labs You cannot control the details of distribution when creating a distributed array

39 Distributed Arrays Example This code distributes the identity matrix among the Labs, multiplies the local part by labindex on each Lab, and reassembles the resulting distributed matrix T on the client:
W = eye(4);
W = distributed(W);
spmd
T = labindex*W;
end
T

40 Codistributed Arrays You can create a codistributed array inside the Labs. When creating a codistributed array, you can control all aspects of distribution, including dimensions and partitions.

41 Codistributed VS Distributed Codistributed arrays are partitioned among the labs from which you execute code to create or manipulate them Distributed arrays are partitioned among labs from the client with the open MATLAB pool Both can be accessed and used in the client code almost like regular arrays

42 Create a Codistributed Array Using a MATLAB constructor function like rand or zeros with a codistributor object argument. Partitioning a larger array that is replicated on all labs, so that the pieces are distributed across the labs. Building from smaller arrays stored on each lab, combining them so that each array becomes a segment of a larger codistributed array.

43 Constructors Valid constructors are: cell, colon, eye, false, Inf, NaN, ones, rand, randn, sparse, speye, sprand, sprandn, true, zeros. Check their syntax with: help codistributed.constructor. Create a codistributed random matrix of size 100 with:
spmd
T = codistributed.rand(100)
end

44 Partitioning a Larger Array When you have sufficient memory to store the initial replicated array, you can use the codistributed function to partition a large array among the Labs:
spmd
A = [11:18; 21:28; 31:38; 41:48];
D = codistributed(A);
getLocalPart(D)
end

45 Building from Smaller Arrays To save memory, you can construct the smaller pieces (local parts) on each lab first, and then combine them into a single array that is distributed across the labs:
matlabpool 3
spmd
A = (labindex-1) * 10 + [ 1:5 ; 6:10 ];
R = codistributed.build(A, codistributor1d(1,[2 2 2],[6 5]))
getLocalPart(R)
C = codistributed.build(A, codistributor1d(2,[5 5 5],[2 15]))
getLocalPart(C)
end...

46 Codistributor1d Describes the distribution scheme. Matrix codistributed by the 1st dimension (rows): codistributor1d(1,[2 2 2],[6 5]) takes 2 rows from the first lab, 2 from the second and 2 from the third, obtaining a 6x5 codistributed matrix.

47 Codistributor1d Describes the distribution scheme. Matrix codistributed by the 2nd dimension (columns): codistributor1d(2,[5 5 5],[2 15]) takes 5 columns from the first lab, 5 from the second and 5 from the third, obtaining a 2x15 codistributed matrix.

48 pmode Like spmd, pmode lets you work interactively with a parallel job running simultaneously on several Labs. Commands you type at the pmode prompt in the Parallel Command Window are executed on all labs at the same time. In contrast to spmd, pmode provides a desktop with a display for each lab running the job, where you can enter commands, see results, access each lab's workspace, etc.

49 pmode Pmode gives a separate view of the situation on each of the Labs.

50 dfeval The dfeval function allows you to evaluate a function in a cluster of workers. You need to provide basic required information, such as the function to be evaluated, the number of tasks to divide the job into, and the variable into which the results are returned:
results = dfeval(@myfun, {[1 1] [2 2] [3 3]}, 'Configuration', 'cluster.tigem.it')

51 dfeval Suppose the function myfun accepts three input arguments and generates two output arguments. The number of elements of the input argument cell arrays determines the number of tasks in the job; all input cell arrays must have the same number of elements. In this example, there are four tasks:
[X, Y] = dfeval(@myfun, {a1 a2 a3 a4}, {b1 b2 b3 b4}, {c1 c2 c3 c4}, 'Configuration','cluster.tigem.it', 'FileDependencies',{'myfun.m'});

52 dfeval results Results are stored this way: X{1} and Y{1} hold the outputs of myfun(a1, b1, c1), X{2} and Y{2} those of myfun(a2, b2, c2), and so on, as if you had executed:
[X{1}, Y{1}] = myfun(a1, b1, c1);
[X{2}, Y{2}] = myfun(a2, b2, c2);
[X{3}, Y{3}] = myfun(a3, b3, c3);
[X{4}, Y{4}] = myfun(a4, b4, c4);

53 Using Torque inside Matlab Find a Job Manager. Create a Job. Create Tasks. Submit a Job to the Job Queue. Retrieve the Job's Results.

54 Find a Job Manager Use findresource to load the configuration:
jm = findresource('scheduler','configuration','cluster.tigem.it')
jm =
PBS Scheduler Information
=========================
Type : Torque
ClusterSize : 27
DataLocation : /home/oliva
HasSharedFilesystem : true
- Assigned Jobs
Number Pending : 1
Number Queued : 0
Number Running : 0
Number Finished : 1

55 Create Job Use createJob to create a Matlab Job object that corresponds to a Torque job:
job1 = createJob(jm)
Job ID 10 Information
=====================
UserName : oliva
State : pending
SubmitTime :
StartTime :
Running Duration :
- Data Dependencies
FileDependencies : {}
PathDependencies : {}

56 Create Tasks Add tasks to the job using createTask:
createTask(job1, @rand, 1, {3,3});
createTask(job1, @rand, 1, {3,3});
createTask(job1, @rand, 1, {3,3});
createTask(job1, @rand, 1, {3,3});
createTask(job1, @rand, 1, {3,3});

57 Submit a Job to the Job Queue Submit your job:
submit(job1)
Retrieve its output:
results = getAllOutputArguments(job1);
Delete the job's data:
destroy(job1);


More information

Notes on Portable Batch System (PBS)

Notes on Portable Batch System (PBS) Introduction Notes on Portable Batch System (PBS) Amit Jain Department of Computer Science Boise State University The Portable Batch System (PBS) is a system that can be used to manage the usage of a cluster.

More information

On-demand (Pay-per-Use) HPC Service Portal

On-demand (Pay-per-Use) HPC Service Portal On-demand (Pay-per-Use) Portal Wang Junhong INTRODUCTION High Performance Computing, Computer Centre The Service Portal is a key component of the On-demand (pay-per-use) HPC service delivery. The Portal,

More information

The CSEM Multi-processor Computing Facility

The CSEM Multi-processor Computing Facility The CSEM Multi-processor Computing Facility J. Knap California Institute of Technology Hardware Configuration Front-end Machine name: alicante.aero.caltech.edu COMPAQ AlphaServer DS10 617 MHz 2GB memory

More information

GRID Computing: CAS Style

GRID Computing: CAS Style CS4CC3 Advanced Operating Systems Architectures Laboratory 7 GRID Computing: CAS Style campus trunk C.I.S. router "birkhoff" server The CAS Grid Computer 100BT ethernet node 1 "gigabyte" Ethernet switch

More information

Grid 101. Grid 101. Josh Hegie. grid@unr.edu http://hpc.unr.edu

Grid 101. Grid 101. Josh Hegie. grid@unr.edu http://hpc.unr.edu Grid 101 Josh Hegie grid@unr.edu http://hpc.unr.edu Accessing the Grid Outline 1 Accessing the Grid 2 Working on the Grid 3 Submitting Jobs with SGE 4 Compiling 5 MPI 6 Questions? Accessing the Grid Logging

More information

Introduction to Grid Engine

Introduction to Grid Engine Introduction to Grid Engine Workbook Edition 8 January 2011 Document reference: 3609-2011 Introduction to Grid Engine for ECDF Users Workbook Introduction to Grid Engine for ECDF Users Author: Brian Fletcher,

More information

MATLAB Distributed Computing Server Cloud Center User s Guide

MATLAB Distributed Computing Server Cloud Center User s Guide MATLAB Distributed Computing Server Cloud Center User s Guide How to Contact MathWorks Latest news: Sales and services: User community: Technical support: www.mathworks.com www.mathworks.com/sales_and_services

More information

NYUAD HPC Center Running Jobs

NYUAD HPC Center Running Jobs NYUAD HPC Center Running Jobs 1 Overview... Error! Bookmark not defined. 1.1 General List... Error! Bookmark not defined. 1.2 Compilers... Error! Bookmark not defined. 2 Loading Software... Error! Bookmark

More information

CLUSTER ULYSSES (with MIC accelerators) Connect to Ulysses with ssh. Ulysses.mat.unimi.it. login: user password: xxxxxxx

CLUSTER ULYSSES (with MIC accelerators) Connect to Ulysses with ssh. Ulysses.mat.unimi.it. login: user password: xxxxxxx CLUSTER ULYSSES (with MIC accelerators) Connect to Ulysses with ssh ssh ulysses.mat.unimi.it login: user password: xxxxxxx Creating a SGE script to Ulysses.mat.unimi.it The process of submitting jobs to

More information

Maxwell compute cluster

Maxwell compute cluster Maxwell compute cluster An introduction to the Maxwell compute cluster Part 1 1.1 Opening PuTTY and getting the course materials on to Maxwell 1.1.1 On the desktop, double click on the shortcut icon for

More information

Cisco Networking Academy Program Curriculum Scope & Sequence. Fundamentals of UNIX version 2.0 (July, 2002)

Cisco Networking Academy Program Curriculum Scope & Sequence. Fundamentals of UNIX version 2.0 (July, 2002) Cisco Networking Academy Program Curriculum Scope & Sequence Fundamentals of UNIX version 2.0 (July, 2002) Course Description: Fundamentals of UNIX teaches you how to use the UNIX operating system and

More information

Production Environment

Production Environment Production Environment Introduction to Marconi HPC Cluster, for users and developers HPC User Support @ CINECA 27/06/2016 PBS Scheduler The production environment on Marconi is based on a batch system,

More information

Job scheduler details

Job scheduler details Job scheduler details Advanced Computing Center for Research & Education (ACCRE) Job scheduler details 1 / 25 Outline 1 Batch queue system overview 2 Torque and Moab 3 Submitting jobs (ACCRE) Job scheduler

More information

Manual for using Super Computing Resources

Manual for using Super Computing Resources Manual for using Super Computing Resources Super Computing Research and Education Centre at Research Centre for Modeling and Simulation National University of Science and Technology H-12 Campus, Islamabad

More information

wu.cloud: Insights Gained from Operating a Private Cloud System

wu.cloud: Insights Gained from Operating a Private Cloud System wu.cloud: Insights Gained from Operating a Private Cloud System Stefan Theußl, Institute for Statistics and Mathematics WU Wirtschaftsuniversität Wien March 23, 2011 1 / 14 Introduction In statistics we

More information

HPCC USER S GUIDE. Version 1.2 July 2012. IITS (Research Support) Singapore Management University. IITS, Singapore Management University Page 1 of 35

HPCC USER S GUIDE. Version 1.2 July 2012. IITS (Research Support) Singapore Management University. IITS, Singapore Management University Page 1 of 35 HPCC USER S GUIDE Version 1.2 July 2012 IITS (Research Support) Singapore Management University IITS, Singapore Management University Page 1 of 35 Revision History Version 1.0 (27 June 2012): - Modified

More information

Using the MATLAB Parallel Computing Toolbox on the UB CCR cluster. L. Shawn Matott and Cynthia Cornelius September, 25, 2014

Using the MATLAB Parallel Computing Toolbox on the UB CCR cluster. L. Shawn Matott and Cynthia Cornelius September, 25, 2014 Using the MATLAB Parallel Computing Toolbox on the UB CCR cluster L. Shawn Matott and Cynthia Cornelius September, 25, 2014 Outline CCR s resources & how to access them Parallel MATLAB resources at CCR

More information

Installing and running COMSOL on a Linux cluster

Installing and running COMSOL on a Linux cluster Installing and running COMSOL on a Linux cluster Introduction This quick guide explains how to install and operate COMSOL Multiphysics 5.0 on a Linux cluster. It is a complement to the COMSOL Installation

More information

Using the Yale HPC Clusters

Using the Yale HPC Clusters Using the Yale HPC Clusters Stephen Weston Robert Bjornson Yale Center for Research Computing Yale University Oct 2015 To get help Send an email to: hpc@yale.edu Read documentation at: http://research.computing.yale.edu/hpc-support

More information

An introduction to compute resources in Biostatistics. Chris Scheller schelcj@umich.edu

An introduction to compute resources in Biostatistics. Chris Scheller schelcj@umich.edu An introduction to compute resources in Biostatistics Chris Scheller schelcj@umich.edu 1. Resources 1. Hardware 2. Account Allocation 3. Storage 4. Software 2. Usage 1. Environment Modules 2. Tools 3.

More information

Getting Started with HPC

Getting Started with HPC Getting Started with HPC An Introduction to the Minerva High Performance Computing Resource 17 Sep 2013 Outline of Topics Introduction HPC Accounts Logging onto the HPC Clusters Common Linux Commands Storage

More information

System Management for IRIX 6.5

System Management for IRIX 6.5 System Management for IRIX 6.5 MPI, Array Services, Miser, NQE, Checkpoint/Restart NQE Development Team Strategic Software Organization Silicon Graphics, Inc. ABSTRACT: MPI, Array Services, Miser, NQE,

More information

An Oracle White Paper November 2010. Leveraging Massively Parallel Processing in an Oracle Environment for Big Data Analytics

An Oracle White Paper November 2010. Leveraging Massively Parallel Processing in an Oracle Environment for Big Data Analytics An Oracle White Paper November 2010 Leveraging Massively Parallel Processing in an Oracle Environment for Big Data Analytics 1 Introduction New applications such as web searches, recommendation engines,

More information

Running Jobs on Blue Waters. Jing Li, Omar Padron

Running Jobs on Blue Waters. Jing Li, Omar Padron Running Jobs on Blue Waters Jing Li, Omar Padron Jobs on Blue Waters Jobs on Blue Waters are managed through the use of: Resource manager: TORQUE Workload manager: Moab Commands for managing jobs on Blue

More information

PBS (Portable Batch System) Basics for UNIX

PBS (Portable Batch System) Basics for UNIX PBS (Portable Batch System) Basics for UNIX 1. Submitting a job - the qsub command. qsub command is used for creating and running batch jobs. To create a job is to submit an executable script to a certain

More information

HPC system startup manual (version 1.30)

HPC system startup manual (version 1.30) HPC system startup manual (version 1.30) Document change log Issue Date Change 1 12/1/2012 New document 2 10/22/2013 Added the information of supported OS 3 10/22/2013 Changed the example 1 for data download

More information

Technical Requirements Guide

Technical Requirements Guide Technical Requirements Guide Contents Introduction... 2 Architecture and performance... 3 Technical Requirements... 4 Non-virtualised environment... 5 Client PC:... 5 Database Server:... 5 Virtualised

More information

Interacting with Moab/Torque using PBS directives. Michael Carlise Graduate Research Assistant Doctoral Candidate (Biology)

Interacting with Moab/Torque using PBS directives. Michael Carlise Graduate Research Assistant Doctoral Candidate (Biology) Interacting with Moab/Torque using PBS directives Michael Carlise Graduate Research Assistant Doctoral Candidate (Biology) Outline (1) What is Torque, Moab and PBS? (2) Batch Computer Systems (3) Basic

More information

DiskPulse DISK CHANGE MONITOR

DiskPulse DISK CHANGE MONITOR DiskPulse DISK CHANGE MONITOR User Manual Version 7.9 Oct 2015 www.diskpulse.com info@flexense.com 1 1 DiskPulse Overview...3 2 DiskPulse Product Versions...5 3 Using Desktop Product Version...6 3.1 Product

More information

File Transfer Examples. Running commands on other computers and transferring files between computers

File Transfer Examples. Running commands on other computers and transferring files between computers Running commands on other computers and transferring files between computers 1 1 Remote Login Login to remote computer and run programs on that computer Once logged in to remote computer, everything you

More information

Libo Sun. October 15, 2014

Libo Sun. October 15, 2014 libosun@rams.colostate.edu Department of Statistics Colorado State University October 15, 2014 Outline 1 What is and 2 on multi-core computers. 3 4 What is and Many statistical analysis tasks are computationally

More information

Overview. Introduction to Pacman. Login Node Usage. Tom Logan. PACMAN Penguin Computing Opteron Cluster

Overview. Introduction to Pacman. Login Node Usage. Tom Logan. PACMAN Penguin Computing Opteron Cluster Overview Introduction to Pacman Tom Logan Hardware Programming Environment Compilers Queueing System PACMAN Penguin Computing Opteron Cluster 12 Login Nodes: 2- Six core 2.2 GHz AMD Opteron Processors;

More information

SA-9600 Surface Area Software Manual

SA-9600 Surface Area Software Manual SA-9600 Surface Area Software Manual Version 4.0 Introduction The operation and data Presentation of the SA-9600 Surface Area analyzer is performed using a Microsoft Windows based software package. The

More information

Optimizing Performance. Training Division New Delhi

Optimizing Performance. Training Division New Delhi Optimizing Performance Training Division New Delhi Performance tuning : Goals Minimize the response time for each query Maximize the throughput of the entire database server by minimizing network traffic,

More information

HPC at IU Overview. Abhinav Thota Research Technologies Indiana University

HPC at IU Overview. Abhinav Thota Research Technologies Indiana University HPC at IU Overview Abhinav Thota Research Technologies Indiana University What is HPC/cyberinfrastructure? Why should you care? Data sizes are growing Need to get to the solution faster Compute power is

More information

An Introduction to Parallel Computing With MPI. Computing Lab I

An Introduction to Parallel Computing With MPI. Computing Lab I An Introduction to Parallel Computing With MPI Computing Lab I The purpose of the first programming exercise is to become familiar with the operating environment on a parallel computer, and to create and

More information

Batch Scripts for RA & Mio

Batch Scripts for RA & Mio Batch Scripts for RA & Mio Timothy H. Kaiser, Ph.D. tkaiser@mines.edu 1 Jobs are Run via a Batch System Ra and Mio are shared resources Purpose: Give fair access to all users Have control over where jobs

More information

Using NeSI HPC Resources. NeSI Computational Science Team (support@nesi.org.nz)

Using NeSI HPC Resources. NeSI Computational Science Team (support@nesi.org.nz) NeSI Computational Science Team (support@nesi.org.nz) Outline 1 About Us About NeSI Our Facilities 2 Using the Cluster Suitable Work What to expect Parallel speedup Data Getting to the Login Node 3 Submitting

More information

InventoryControl for use with QuoteWerks Quick Start Guide

InventoryControl for use with QuoteWerks Quick Start Guide InventoryControl for use with QuoteWerks Quick Start Guide Copyright 2013 Wasp Barcode Technologies 1400 10 th St. Plano, TX 75074 All Rights Reserved STATEMENTS IN THIS DOCUMENT REGARDING THIRD PARTY

More information

CLC Server Command Line Tools USER MANUAL

CLC Server Command Line Tools USER MANUAL CLC Server Command Line Tools USER MANUAL Manual for CLC Server Command Line Tools 2.5 Windows, Mac OS X and Linux September 4, 2015 This software is for research purposes only. QIAGEN Aarhus A/S Silkeborgvej

More information

Introduction to MSI* for PubH 8403

Introduction to MSI* for PubH 8403 Introduction to MSI* for PubH 8403 Sep 30, 2015 Nancy Rowe *The Minnesota Supercomputing Institute for Advanced Computational Research Overview MSI at a Glance MSI Resources Access System Access - Physical

More information

1 Introduction. 2 Installing Paraview locally. 3 Setting up the paraview client. 4 Setting up an SSH tunnel

1 Introduction. 2 Installing Paraview locally. 3 Setting up the paraview client. 4 Setting up an SSH tunnel 1 Introduction Paraview is a visualisation tool intended for handling very large data sets. In order to handle very large data sets it comes with the ability to be run remotely using the mutliple node

More information

Tamás Budavári / The Johns Hopkins University

Tamás Budavári / The Johns Hopkins University PRACTICAL SCIENTIFIC ANALYSIS OF BIG DATA RUNNING IN PARALLEL / The Johns Hopkins University 2 Parallelism Data parallel Same processing on different pieces of data Task parallel Simultaneous processing

More information

Specific Information for installation and use of the database Report Tool used with FTSW100 software.

Specific Information for installation and use of the database Report Tool used with FTSW100 software. Database Report Tool This manual contains: Specific Information for installation and use of the database Report Tool used with FTSW100 software. Database Report Tool for use with FTSW100 versions 2.01

More information