Introductory Tutorial to Parallel and Distributed Computing Tools of cluster.tigem.it


1 Introductory Tutorial to Parallel and Distributed Computing Tools of cluster.tigem.it

2 Cluster: a computer cluster is a group of networked computers working together closely. The computers are called nodes.

3 Cluster node: each cluster node contains one or more CPUs, memory, disks, network interfaces, a graphics adapter, and so on, like your desktop computer. It can execute programs without tying up your workstation.

4 Terminology (cluster): the front-end node is where users log in and interact with the system; the computing nodes execute users' programs.

5 Users' home directories: the users' home directories are hosted on the front-end, which shares them with the computing nodes.

6 cluster.tigem.it node specifications
28 x Dell PowerEdge 1750 servers
CPU: 2 x Intel Xeon CPU 3.06GHz
Memory: 2GB (node 18: 4GB, front-end: 8GB)
Disk: 147GB
OS: Linux; Distribution: CentOS (~ Red Hat Enterprise)

7 cluster.tigem.it specifications
One cluster
CPU: 56 x Intel Xeon CPU 3.06GHz
Memory: 64GB
Disk: 8700GB
OS: Linux; Distribution: Rocks Cluster

8 Access to the cluster
Login:
ssh/PuTTY: text based (faster)
VNC: graphical (fast)
X11: graphical (slow)
File transfer:
scp: text based
WinSCP/Cyberduck: graphical

9 Problem: how can we manage multi-user access to the cluster nodes? A users' agreement? Assigning a subset of nodes to each user? Neither is feasible nor convenient. Instead, we can use a resource management system.

10 Terminology (resource management systems): batch, or batch processing, is the capability of running jobs outside of the interactive login session. A job, or batch job, is the basic execution object managed by the batch subsystem: a collection of related processes managed as a whole. A job can often be thought of as a shell script.

11 Terminology (resource management systems): a queue is an ordered collection of jobs within the batch queuing system. Each queue has a set of associated attributes which determine what actions are performed on each job within the queue. Typical attributes include queue name, queue priority, resource limits, destination(s) and job count limits. Selection and scheduling of jobs depends on a central job manager.

12 Using a resource management system: to execute our programs on the cluster we need to prepare a job, i.e. write a non-interactive shell script and enqueue it in the system. Non-interactive means that the input, output and error streams are files.

13 First simple script: sleep 10 seconds and print the hostname.

[oliva@cluster ~]$ cat hostname.sh
#!/bin/sh
sleep 10
/bin/hostname

14 First simple submission: submit the job to the serial queue.

[oliva@cluster ~]$ qsub -q serial hostname.sh
1447.cluster.tigem.it

The output of the qsub command is the JobID (Job IDentifier), a unique value that identifies your job inside the system.

15 First simple status query: look at the job status with qstat.

[oliva@cluster ~]$ qstat
Job id                  Name         User   Time Use  S  Queue
1447.cluster.tigem.it   hostname.sh  oliva  0         R  serial

qstat displays job statuses sorted by JobID. R means Running, Q means Queued (see man qstat for the other values).

16 Job Completion: when our simple job is completed we can find two files in our directory.

[oliva@cluster ~]$ ls hostname.sh.*
hostname.sh.e1447  hostname.sh.o1447

${JobName}.e${JobID} contains the job's standard error stream, while ${JobName}.o${JobID} contains the job's standard output. Look inside them with cat!
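
For example, since our script prints the node's hostname, the output file should contain a single line; the node name shown here is hypothetical:

[oliva@cluster ~]$ cat hostname.sh.o1447
compute-0-14.local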

17 Status of the queues: the qstat command can also be used to check the queue status.

[oliva@cluster ~]$ qstat -q

18 Cancelling Jobs: to cancel a running or queued job, use the qdel command. qdel accepts the JobID as its argument.

[oliva@cluster ~]$ qdel 1448.cluster.tigem.it

19 Interactive Jobs: qsub allows you to execute interactive jobs by using the -I option. If your program is controlled by a graphical user interface you can also export the display with the -X option (as with ssh). To run matlab on a dedicated node:

[oliva@cluster ~]$ qsub -X -I -q serial
qsub: waiting for job 1449.cluster.tigem.it to start
qsub: job 1449.cluster.tigem.it ready
[oliva@compute-0-14 ~]$ /share/apps/...

20 Interactive Jobs: the use of graphical user interfaces on cluster nodes is HIGHLY DISCOURAGED!!!! You'd better use matlab from the terminal:

[oliva@cluster ~]$ /share/apps/matlab/bin/matlab -nodisplay
>>

21 Exclusive use of a cluster node: every node of our cluster is equipped with 2 CPUs, so the job manager allocates up to 2 jobs on each node. Torque allows you to use a node exclusively, ensuring that only our job is executed on that node, by specifying the option -W x="naccesspolicy:singlejob" (see the example below).
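
As a minimal sketch, the exclusivity option is simply added to the usual qsub invocation, here reusing the hostname.sh script from the earlier examples:

[oliva@cluster ~]$ qsub -W x="naccesspolicy:singlejob" -q serial hostname.sh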

22 Batch Matlab Jobs: to run your matlab program in a non-interactive batch job you need to invoke matlab with the -nodesktop option and redirect its standard input from the .m file.

[oliva@cluster ~]$ cat matlab.sh
#!/bin/sh
/usr/local/bin/matlab -nodesktop < /data/user/run1.m

23 Batch R Jobs: to run your R program in a non-interactive batch job you need to invoke R with the CMD BATCH arguments, the name of the file containing the R code to be executed, any options, and the name of the output file.

Syntax: R CMD BATCH [options] infile [outfile]

[oliva@cluster ~]$ cat R.sh
#!/bin/sh
/usr/bin/R CMD BATCH script.R script.Rout

24 Job Array: to submit large numbers of jobs based on the same job script without repeatedly calling qsub, job arrays allow the creation of multiple jobs with one qsub command. A job naming convention allows users to reference the entire set of jobs as a unit, or to reference one particular job from the set.

25 Job Array: to submit a job array use the -t option with a range of integers, which can also be combined in a comma-separated list (for example -t 1-10, or a list such as -t 1,10).

[oliva@cluster ~]$ qsub -t 1-2 -q serial hostname.sh
1450.cluster.tigem.it

Job id           Name           User   Time Use  S  Queue
1450-1.cluster   hostname.sh-1  oliva  0         Q  default
1450-2.cluster   hostname.sh-2  oliva  0         Q  default

The suffix after the dash in each Job id is the ArrayID.

26 PBS_ARRAYID: each job in a job array gets a unique ArrayID. Use the ArrayID value in your script through the PBS_ARRAYID environment variable. Example: suppose you have 1000 jpg images named image-1.jpg, image-2.jpg, ... and want to convert them to the png format:

[oliva@cluster ~]$ cat image-processing.sh
#!/bin/bash
convert image-$PBS_ARRAYID.jpg image-$PBS_ARRAYID.png
[oliva@cluster ~]$ qsub -t 1-1000 image-processing.sh

27 Matlab Parallel Computing Toolbox

28 Matlab PCT Architecture: the Parallel Computing Toolbox (PCT) allows you to offload work from one MATLAB session (the client) to other MATLAB sessions, called workers.

29 Matlab PCT: you can use multiple workers to take advantage of parallel processing, or use a worker to keep your MATLAB client session free for interactive work. The MATLAB Distributed Computing Server software allows you to run up to 54 workers on cluster.tigem.it.
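
A hedged sketch of reserving cluster workers from the client: it assumes a parallel configuration named 'cluster.tigem.it' has been set up (the same name used in the dfeval examples later in this tutorial), and 8 is just an example pool size:

% open a pool of 8 workers using the cluster configuration
matlabpool('open', 'cluster.tigem.it', 8)
% ... parfor / spmd / distributed-array code runs on the pool here ...
matlabpool('close')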

30 Matlab PCT use cases: parallel for-loops (parfor), large data sets, SPMD, pmode.

31 Repetitive iterations: many applications involve multiple segments of repetitive code (for-loops). Parameter sweep applications come in two flavours:
Many iterations: a sweep might take a long time because it comprises many iterations. Each iteration by itself might not take long to execute, but completing thousands or millions of iterations in serial could take a long time.
Long iterations: a sweep might not have a lot of iterations, but each iteration could take a long time to run.

32 parfor: a parfor-loop does the same job as the standard MATLAB for-loop: it executes a series of statements (the loop body) over a range of values. Part of the parfor body is executed on the MATLAB client (where the parfor is issued) and part is executed in parallel on MATLAB workers. Data is sent from the client to the workers, and the results are sent back to the client and pieced together.

33 parfor execution steps: for and parfor code comparison.

for i=1:1024
    A(i) = sin(i*2*pi/1024);
end
plot(A)

matlabpool open local 3
parfor i=1:1024
    A(i) = sin(i*2*pi/1024);
end
plot(A)
matlabpool close

To interactively run code that contains a parallel loop, first open a MATLAB pool to reserve a collection of MATLAB workers.

34 parfor limitations: you cannot use a parfor-loop when an iteration in your loop depends on the results of other iterations; each iteration must be independent of all others (see the counter-example below). Also, since there is a communication cost involved in a parfor-loop, there might be no advantage to using one when you have only a small number of simple calculations.
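
A minimal counter-example with a loop-carried dependency: each iteration reads the value written by the previous one, so the iterations cannot run independently and this loop cannot be converted to a parfor:

% running sum: s(i) needs s(i-1), so iterations are NOT independent
s = zeros(1,1024);
s(1) = 1;
for i = 2:1024
    s(i) = s(i-1) + i;
end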

35 Single Program Multiple Data: the single program multiple data (spmd) language construct allows the combined use of serial and parallel programming. The spmd statement lets you define a block of code to run simultaneously on multiple workers (called Labs).

36 SPMD example: this code creates the same identity matrix of random size on all the Labs, selects the same random row on each Lab, then selects a different random row on each Lab.

matlabpool 4
i = randi(10,1)
spmd
    R = eye(i);
end
j = randi(i,1)      % computed on the client: the same row on every Lab
spmd
    R(j,:)
    k = randi(i,1)  % computed on each Lab: a different row on each Lab
    R(k,:)
end

37 labindex variable: the Labs used for an spmd statement each have a unique value for labindex. This lets you specify code to be run on only certain labs, or customize execution, usually for the purpose of accessing unique data.

spmd
    labdata = load(['datafile_' num2str(labindex) '.ascii'])
    result = MyFunction(labdata)
end

38 Distributed Arrays: you can create a distributed array in the MATLAB client; its data is stored on the Labs of the open MATLAB pool. A distributed array is distributed in one dimension, along the last nonsingleton dimension, as evenly as possible among the labs. You cannot control the details of distribution when creating a distributed array.

39 Distributed Arrays example: this code distributes the identity matrix among the Labs, multiplies the local part by labindex, and reassembles the resulting distributed matrix T on the client.

W = eye(4);
W = distributed(W);
spmd
    T = labindex*W;
end
T

40 Codistributed Arrays: you can create a codistributed array inside the Labs. When creating a codistributed array, you can control all aspects of distribution, including dimensions and partitions.

41 Codistributed vs Distributed: codistributed arrays are partitioned among the labs from which you execute code to create or manipulate them. Distributed arrays are partitioned among the labs from the client with the open MATLAB pool. Both can be accessed and used in client code almost like regular arrays.

42 Create a Codistributed Array, in one of three ways:
Using a MATLAB constructor function like rand or zeros with a codistributor object argument.
Partitioning a larger array that is replicated on all labs, so that the pieces are distributed across the labs.
Building from smaller arrays stored on each lab, and combining them so that each array becomes a segment of a larger codistributed array.

43 Constructors: valid constructors are cell, colon, eye, false, Inf, NaN, ones, rand, randn, sparse, speye, sprand, sprandn, true, zeros. Check their syntax with: help codistributed.constructor

Create a codistributed random matrix of size 100 with:

spmd
    T = codistributed.rand(100)
end

44 Partitioning a Larger Array: when you have sufficient memory to store the initial replicated array, you can use the codistributed function to partition a large array among the Labs.

spmd
    A = [11:18; 21:28; 31:38; 41:48];
    D = codistributed(A);
    getLocalPart(D)
end

45 Building from Smaller Arrays: to save on memory, you can construct the smaller pieces (local parts) on each lab first, and then combine them into a single array that is distributed across the labs.

matlabpool 3
spmd
    A = (labindex-1) * 10 + [1:5; 6:10];
    R = codistributed.build(A, codistributor1d(1, [2 2 2], [6 5]))
    getLocalPart(R)
    C = codistributed.build(A, codistributor1d(2, [5 5 5], [2 15]))
    getLocalPart(C)
end
...

46 codistributor1d describes the distribution scheme. A matrix codistributed by the 1st dimension (rows) with codistributor1d(1, [2 2 2], [6 5]) places 2 rows on the first lab, 2 on the second and 2 on the third, giving a 6x5 codistributed matrix.

47 codistributor1d describes the distribution scheme. A matrix codistributed by the 2nd dimension (columns) with codistributor1d(2, [5 5 5], [2 15]) places 5 columns on the first lab, 5 on the second and 5 on the third, giving a 2x15 codistributed matrix.
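
A quick way to check a distribution scheme is to inspect the local part on each lab. This is a sketch assuming a 3-lab pool is open; slide 43 lists ones among the constructors that accept a codistributor object:

spmd
    % 6x5 codistributed matrix of ones, rows partitioned [2 2 2]
    A = codistributed.ones(6, 5, codistributor1d(1, [2 2 2], [6 5]));
    size(getLocalPart(A))   % [2 5] on each of the 3 labs
end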

48 pmode: like spmd, pmode lets you work interactively with a parallel job running simultaneously on several Labs. Commands you type at the pmode prompt in the Parallel Command Window are executed on all labs at the same time. In contrast to spmd, pmode provides a desktop with a display for each lab running the job, where you can enter commands, see results, access each lab's workspace, etc.

49 pmode: pmode gives a separate view of the situation on the various Labs.
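
A minimal sketch of starting and stopping an interactive pmode session; 'local' and 4 are example values for the configuration name and the number of labs:

pmode start local 4
% ... work in the Parallel Command Window, then:
pmode exit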

50 dfeval: the dfeval function allows you to evaluate a function on a cluster of workers. You need to provide basic required information, such as the function to be evaluated, the number of tasks to divide the job into, and the variable into which the results are returned.

results = dfeval(@sum, {[1 1] [2 2] [3 3]}, 'Configuration', 'cluster.tigem.it')

51 dfeval: suppose the function myfun accepts three input arguments and generates two output arguments. The number of elements of the input cell arrays determines the number of tasks in the job, and all input cell arrays must have the same number of elements. In this example there are four tasks:

[X, Y] = dfeval(@myfun, {a1 a2 a3 a4}, {b1 b2 b3 b4}, {c1 c2 c3 c4}, 'Configuration', 'cluster.tigem.it', 'FileDependencies', {'myfun.m'});

52 dfeval results: results are stored this way:

X{1}, Y{1} <- myfun(a1, b1, c1)
X{2}, Y{2} <- myfun(a2, b2, c2)
X{3}, Y{3} <- myfun(a3, b3, c3)
X{4}, Y{4} <- myfun(a4, b4, c4)

as if you had executed:

[X{1}, Y{1}] = myfun(a1, b1, c1);
[X{2}, Y{2}] = myfun(a2, b2, c2);
[X{3}, Y{3}] = myfun(a3, b3, c3);
[X{4}, Y{4}] = myfun(a4, b4, c4);

53 Using Torque inside Matlab:
1. Find a job manager
2. Create a job
3. Create tasks
4. Submit the job to the job queue
5. Retrieve the job's results

54 Find a Job Manager: use findresource to load the configuration.

jm = findresource('scheduler', 'configuration', 'cluster.tigem.it')

jm =
PBS Scheduler Information
=========================
Type                : Torque
ClusterSize         : 27
DataLocation        : /home/oliva
HasSharedFilesystem : true
- Assigned Jobs
Number Pending  : 1
Number Queued   : 0
Number Running  : 0
Number Finished : 1

55 Create Job: use createJob to create a Matlab Job object that corresponds to a Torque job.

job1 = createJob(jm)

Job ID 10 Information
=====================
UserName         : oliva
State            : pending
SubmitTime       :
StartTime        :
Running Duration :
- Data Dependencies
FileDependencies : {}
PathDependencies : {}

56 Create Tasks: add tasks to the job using createTask. Here we create five identical tasks, each returning one output argument, a 3x3 random matrix:

createTask(job1, @rand, 1, {3,3});
createTask(job1, @rand, 1, {3,3});
createTask(job1, @rand, 1, {3,3});
createTask(job1, @rand, 1, {3,3});
createTask(job1, @rand, 1, {3,3});

57 Submit a Job to the Job Queue: submit your job:

submit(job1)

Retrieve its output:

results = getAllOutputArguments(job1);

Delete the job's data:

destroy(job1);
