Introduction to Matlab Distributed Computing Server (MDCS) Dan Mazur and Pier-Luc St-Onge December 1st, 2015


1 Introduction to Matlab Distributed Computing Server (MDCS) Dan Mazur and Pier-Luc St-Onge December 1st, 2015

2 Partners and sponsors 2

3 Exercise 0: Login and Setup Ubuntu login: Username: csuser07 Example hand-out slip: 07: k41a0?wy# Guillimin login: ssh Password: k41a0?wy# 3

4 Outline Introduction and Overview Configuring MDCS for Guillimin Submitting and monitoring jobs on Guillimin batch command Parallel toolbox parfor loops (parallel for loops) spmd sections (single program multiple data) distributed arrays (large memory problems) GPUs and Xeon Phis 4

5 Parallel Computing Toolbox (PCT) High-level constructs for parallel programming: parallel for loops, distributed arrays, data-parallel (spmd) sections. Implicit (automatic) parallelism. Implemented with MPI (MPICH2). Restricted to 12 cores on a single node: multi-node scalability is built into MPICH2, but Mathworks intentionally limits this scalability.

6 MDCS Overview MDCS allows parallel toolbox users access to a number of workers (set by the license terms) on any number of nodes 6

7 MDCS vs. PCT differences MDCS jobs are submitted to the batch system on a cluster, not run locally (a client-server model). In PCT, one explicitly starts a parpool environment; in MDCS, this environment is requested in the batch() command.

8 MDCS Overview (diagram, slides 8 to 11): Matlab on your PC submits a .m script plus attached files to the MDCS scheduler on Guillimin; the scheduler launches the job on the worker nodes and returns monitoring information to your desktop session. Important: Do not attach large data files. Data transfer to and from Guillimin is best accomplished with scp or sftp. See the Guillimin documentation for large file transfers.

12 MDCS Licensing: One N-worker MDCS job consumes N MDCS worker licenses, plus one worker license for the master process, drawn from a pool of 64 MDCS licenses provided by McGill HPC. The desktop Matlab license, the Parallel Computing Toolbox license, and any additional toolbox licenses are provided by the user (often via their institution).

13 MDCS Scenario: Researchers begin using desktop Matlab under institutional licenses. Eventually, researchers and research programs come to depend on the resulting software. Problem sizes grow with time, eventually necessitating parallel computing. No problem: Mathworks implements its parallel computing toolbox functionality on top of an MPI implementation with good scaling behaviour provided by the free software community. But they place restrictions on the number of nodes and cores, and require additional licenses to remove these restrictions. Because of decisions made years ago, researchers find themselves facing either potentially expensive license fees to unlock their software's capabilities, or financial and time barriers to switching vendors (i.e. porting code).

14 MDCS Alternatives
Compile MPI functions with mex
Use MatlabMPI: uses a global file system for MPI-like communication; low performance for tightly-coupled problems
Use GNU Octave: reduces the switching costs by re-implementing the Matlab programming language; parallel capabilities are less mature than Matlab's
Port code to another language (Python, R, Fortran, etc.): significant effort and time
Drawbacks of these alternatives: difficult to maintain, cannot use PCT functions, cannot use the Matlab debugger, must have access to many individual Matlab licenses (e.g. a TAH license)
Contact us for help and advice: guillimin@calculquebec.ca

15 MDCS Desktop Configuration 1) Install scripts used for communicating with scheduler 2) Configure the cluster profile 3) Verify your setup 15

16 Exercise 1: Install Scripts
Download and unpack the .tar.gz configuration file on your local machine. E.g. Linux:
cd <workdir>
wget \
tar -xvf guillimin_mdcs_config_v2.3.tar.gz
Copy all "config/toolbox-local/*" files to the "<your_matlab_install>/toolbox/local" folder on your local machine
Start or restart Matlab. Then test your installation:
>> glmnversion

17 Permissions What if you don't have write access to the toolbox/local folder? Create a new folder in your home directory for Matlab scripts Add the new path to your Matlab path path('newpath', path); Set new path in a startup.m file Use MATLABPATH environment variable in Mac and Linux OSs 17
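The startup.m approach above can be sketched as follows (the folder name is a hypothetical example, not from the slides):

```matlab
% startup.m -- executed automatically when Matlab starts
% Prepend a personal scripts folder to the Matlab search path.
% '~/matlab-scripts' is a hypothetical example; use your own folder.
path('~/matlab-scripts', path);
```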

18 MDCS Integration Scripts
glmncommsubfcn.m, glmnindsubfcn.m: main drivers for submitting jobs
glmngetremoteconn.m: establishes the connection to the cluster with ssh
glmndeletejobfcn.m: cancels a job on the cluster through Matlab
glmngetjobstatefcn.m: gets the job status from the cluster
glmnpbs.m: specifies the submission parameters
glmncreatesubscript.m: creates a script which will run on the cluster to submit the job
glmngensubmitstring.m: generates the qsub command
glmnextractjobid.m: gets the PBS jobid from the cluster
glmncommjobwrapper.sh, glmnindjobwrapper.sh: the script that is submitted to the worker nodes by qsub

19 Avoiding Metadata Corruption Each pair (Server, Matlab installation) requires a pair of metadata folders, one on the submitting computer and one on Guillimin E.g. installing a new version of Matlab and re-using the same metadata folders will result in corruption E.g. Submitting to a new MDCS server and re-using the same metadata folders will result in corruption E.g. Multiple users from the same client will require a shared metadata folder (read and write) or separate profiles Important: You cannot re-use your class account configuration for other Guillimin accounts 19

20 How many metadata folders? Servers: guillimin R2013a Clients: orcinus R2014a Lab computer R2013a Home computer 20

21 How many metadata folders? Answer: 12 Servers: guillimin R2013a Clients: orcinus R2014a Lab computer R2013a Home computer 21

22 Exercise 2: Configure your computer
We have made a script, glmnconfigcluster.m, to make configuration easier
Warning: glmnconfigcluster will overwrite any profiles called 'guillimin'
>> glmnconfigcluster
Enter a unique name for your local computer (e.g. the hostname): workshop
Home directory on local computer (e.g. /home/alex, /Users/alex, or C:\\Users\\alex): /Users/dmazur
Home directory on guillimin (e.g. /home/alex): /home/dmazur
One last step: please connect to guillimin, and create your Matlab job directory:
mkdir -p /home/dmazur/.matlab/jobs/workshop/guillimin/r2014a
Once done, your local computer will be configured to submit jobs to guillimin.

23 Exercise 3: Validation
You will want to test your new cluster with simple tests before trying more complicated codes
Clicking the validation button in Matlab can take a long time, and the final test is expected to fail
Instead, perform the validation procedure from the McGill HPC documentation
It must be performed in the TestParfor directory: cd examples/testparfor
In glmnpbs.m, set procspernode to 3

24 A simple batch job
mycluster = parcluster('guillimin') selects a cluster profile
j = batch(mycluster, ...) submits a job to the cluster; you are prompted for your username (select 'no' when asked to use an identity file), then for your password
wait(j) waits for the job to finish

25 Exercise 4: Simple Batch Job
>> mycluster = parcluster('guillimin')
>> j = batch(mycluster, @rand, 1, {10, 10}, 'CurrentDirectory', '.');
>> wait(j)
>> r = fetchoutputs(j)

26 glmnpbs.m For parallel jobs, we have a script (glmnpbs.m) to make job submission easier Place this script in your working directory Before submission, check that you have a valid glmnpbs.m file, and that your submission parameters are correct >> test = glmnpbs(); >> test.getsubmitargs() 26

27 classdef glmnpbs
    %Guillimin PBS submission arguments
    properties
        % Local script, remote working directory (home, by default)
        localscript = 'TestParfor';
        workingdirectory = '.';
        % nodes, ppn, gpus, phis and other attributes
        numberofnodes = 1;
        procspernode = 3;
        gpus = 0;
        phis = 0;
        attributes = '';
        % Specify the memory per process required
        pmem = '1700m'
        % Requested walltime
        walltime = '00:30:00'
        % Please use metaq unless you require a specific node type
        queue = 'metaq'
        % All jobs should specify an account or RAPid:
        % e.g. account = 'xyz-123-aa'
        account = '';
        % You may use otheroptions to append a string to the qsub command
        % e.g. otheroptions = '-M [at]address.com -m bae'
        otheroptions = ''
    end

28 Submitting with glmnpbs.m
methods(Static)
    function job = submitto(cluster)
        opt = glmnpbs();
        job = batch(cluster, opt.localscript, ...
            'matlabpool', opt.getnbworkers(), ...
            'CurrentDirectory', opt.workingdirectory ...
        );
    end
end
>> cluster = parcluster('guillimin');
>> glmnpbs.submitto(cluster);
Note that glmnpbs.m must be present for all job submissions, even with batch(); it is called by glmncommsubfcn.m

29 Matlab Job Monitor Parallel > Monitor Jobs Select Profile: guillimin Enter username Select 'no' Enter password Tip: Set autoupdate to 'never', or use an identity file. Otherwise, Matlab interrupts your work with password requests. 29

30 Matlab Job Monitor Job Monitor can report the state, and more details such as output and errors (right click). 30

31 Monitoring Jobs on Guillimin
Show running and queued jobs: qstat -u class01 (qstat shows both MDCS and other Guillimin jobs)
Detailed scheduler information for the job with jobid=########: qstat -f ########
Meta-data is stored in job-specific folders: /home/username/.matlab/jobs/workshop/guillimin/r2014a/Job1
The .log files contain output and errors from Matlab itself
The .txt files contain output from disp() and fprintf()
You should create output (fprintf()) and save Matlab .mat files (save()) within your Guillimin storage (scratch, home, or project spaces)

32 Exercise 5: Submit Parallel Job Change the working directory to the examples/testparfor folder you copied from the.tar.gz configuration file Launch TestParFor.m using glmnpbs.m >> cluster = parcluster('guillimin') >> job = glmnpbs.submitto(cluster) 32

33 Make sure you are in the correct directory >> cluster = parcluster('guillimin') >> job = glmnpbs.submitto(cluster) This script runs for ~15 minutes. You may use showq or the job monitor to monitor its progress. 33

34 Exercise Codes While your job is waiting/running... Please download and extract the exercise codes from our website intro_mdcs/dec2015.tar.gz 34

35 Parallel Matlab Benefits of parallelism Computations complete faster Scale to larger data sets in the same amount of time Work with larger data sets using distributed memory 35

36 Parallel Matlab
Implicit (automatic) parallelism: Bioinformatics Toolbox, Image Processing Toolbox, Optimization Toolbox, Signal Processing Toolbox, Statistics Toolbox, etc.
Explicit parallelism: parallel toolbox, parfor, spmd, distributed()

37 TestParfor.m
function TestParfor;
clear all;
N=4000;
filename='~/output_test_parfor.txt';   % location of output file on Guillimin
outfile = fopen(filename,'w');
fprintf(outfile, 'CALCULATION LOG: \n\n');
tic;
% Serial 'for' loop executed on head processor
for k=1:10
    Ham(:,:,k)=rand(N)+i*rand(N);
    fprintf(outfile,'Serial: Doing K-point : %3i\n', k);
    inv(Ham(:,:,k));
end
t2=toc;
fprintf(outfile, 'Time serial = %12f\n', t2);
fclose(outfile);
tic;
% Parallel 'parfor' loop executed on 2 worker nodes
parfor k=1:10
    Ham(:,:,k)=rand(N)+i*rand(N);
    outfile = fopen(filename,'a');
    fprintf(outfile,'Parallel: Doing K-point : %3i\n', k);
    fclose(outfile);
    inv(Ham(:,:,k));
end
t2=toc;
outfile = fopen(filename,'a');
fprintf(outfile, 'Time parallel = %12f\n', t2);
fprintf(outfile, 'CALCULATIONS DONE... \n\n');
fclose(outfile);

38 Parfor (diagram): in a serial for loop, iterations i=1, i=2, i=3, i=4 execute one after another in time; in a parallel parfor loop with 4 workers, the four iterations execute simultaneously.

39 ~/output_test_parfor.txt
CALCULATION LOG:
Serial: Doing K-point : 1
Serial: Doing K-point : 2
Serial: Doing K-point : 3
Serial: Doing K-point : 4
Serial: Doing K-point : 5
Serial: Doing K-point : 6
Serial: Doing K-point : 7
Serial: Doing K-point : 8
Serial: Doing K-point : 9
Serial: Doing K-point : 10
Time serial =
Parallel: Doing K-point : 7
Parallel: Doing K-point : 4
Parallel: Doing K-point : 6
Parallel: Doing K-point : 3
Parallel: Doing K-point : 5
Parallel: Doing K-point : 2
Parallel: Doing K-point : 1
Parallel: Doing K-point : 9
Parallel: Doing K-point : 8
Parallel: Doing K-point : 10
Time parallel =
CALCULATIONS DONE...
The serial 'for' loop executed on the head processor; the parallel 'parfor' loop executed on 2 worker nodes. Note the parallel iterations complete out of order. Ideal speedup = 2.00X, actual speedup = 1.90X

40 Parfor loops
The loop index must consist of consecutive integers and cannot be altered inside the loop
Iterations must be independent of one another
Local or temporary variables modified inside the parfor loop can't be used after the loop
parfor loops cannot be nested, but a parfor loop doesn't need to be the outermost for loop
The Matlab editor will automatically warn about these problems
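A minimal sketch of the independence rule (the loop bodies are hypothetical, not from the exercises):

```matlab
n = 8;
x = zeros(1, n);
parfor k = 1:n
    x(k) = k^2;        % OK: each iteration depends only on its own index k
end

% Not allowed: each iteration reads a value written by the previous one,
% so the iterations are not independent; the Matlab editor flags this.
% parfor k = 2:n
%     x(k) = x(k-1) + 1;
% end
```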

41 Load Balancing
Each iteration of the parfor loop should do an equal amount of work.
Good load balancing:
parfor i = 1:40
    x = rand(1000, 1000);
    inv(x);
end
Bad load balancing (the 40th iteration has much more work than the 1st):
parfor i = 1:40
    x = rand(100*i, 100*i);
    inv(x);
end

42 Parallel Reduction
>> s = 0;
>> parfor i = 1:40
>>     s = s + i;
>> end
>> disp(s)
820
The operation is performed 'atomically'
The operation must be associative, e.g. addition or multiplication, not subtraction or division

43 Aside: Atomic Operations
>> s = 0;
>> parfor i = 1:40
>>     s = s + i;
>> end
>> disp(s)
820
Each addition involves three steps. Step 1: read s from memory. Step 2: add i. Step 3: store the result in s.
(diagram: with non-atomic addition, Worker 1 and Worker 2 can both read the same value of s before either stores its result)

44 Aside: Atomic Operations
>> s = 0;
>> parfor i = 1:40
>>     s = s + i;
>> end
>> disp(s)
820
(diagram continued: when both workers read s before either stores its result, one update is lost; with atomic addition, each worker's read-add-store sequence completes before another worker's begins, so every update is counted)

45 Aside: Atomic Operations
>> s = 0;
>> parfor i = 1:40
>>     s = s + i;
>> end
>> disp(s)
820
Matlab calls 's' a 'reduction variable', and these operations are automatically atomic. See distcomp/reduction-variables.html

46 Parallel Concatenation
>> y = [];
>> parfor i = 1:10
>>     y = [y, i];
>> end
>> disp(y)
The matrix is stored in the 'correct' order according to index i

47 Parameter Sweep Damped harmonic oscillator Give initial velocity for a variety of k's and b's and watch maximum response amplitude 47

48 Exercise 6: Parameter Sweep paramsweep.m solves a second-order ordinary differential equation (ODE) for varying parameter values Modify this code to run in parallel on 2 workers Submit your modified code to the MDCS Retrieve the resulting plot from Guillimin using scp and view it on your laptop [laptop]$ scp \ class07@guillimin.hpc.mcgill.ca:~/paramsweep.png./ 48

49 (figure: the paramsweep.png plot of maximum response amplitude from the parameter sweep)

50 Single Program Multiple Data The spmd command allows each worker to execute the same program on different data. The variables labindex and numlabs, automatically defined inside spmd sections, are (for example) used to index the data. The functions labSend() and labReceive() are used to send and receive data between the workers.
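A minimal point-to-point sketch of labSend()/labReceive(), assuming a pool of at least 2 workers (the matrix payload is a hypothetical example):

```matlab
spmd
    if labindex == 1
        labSend(magic(4), 2);    % lab 1 sends a 4x4 matrix to lab 2
    elseif labindex == 2
        data = labReceive(1);    % lab 2 receives it from lab 1
    end
end
```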

51 >> matlabpool(3) % Or parpool(3) on newer versions of Matlab
Starting matlabpool using the 'local' profile... connected to 3 workers.
>> spmd
labindex
end
Lab 1: ans = 1
Lab 2: ans = 2
Lab 3: ans = 3
>> spmd
q = magic(labindex + 2);
end
Lab 1: class = double, size = [3 3]
Lab 2: class = double, size = [4 4]
Lab 3: class = double, size = [5 5]
>> q{1}
ans =
>> q{2}
ans =

52 SPMD data load example
SPMD can be used to have each worker process data from separate files. For example, to process data stored in files datafile1.mat, datafile2.mat, etc.:
spmd
    infile = load(['datafile' num2str(labindex) '.mat']);
    result = myfunc(infile)
end

53 Serial numerical integration (midpoint rule for the integral of cos(x) on [0, b]):
m = 10; b = pi/2;
dx = b/m;
x = dx/2:dx:b-dx/2;
int = sum(cos(x)*dx)

54 SPMD Integral We would like to parallelize this integral using spmd In terms of m, b, numlabs and labindex: How many increments per lab? Integration length per lab? Local integration range? We can use gplus() to perform a global sum over workers 54

55 SPMD Integral
We would like to parallelize this integral using spmd. In terms of m, b, numlabs and labindex:
How many increments per lab? n = m / numlabs
Integration length per lab? Delta = dx * n = (b / m) * (m / numlabs) = b / numlabs
Local integration range? ai = (labindex - 1) * Delta, bi = labindex * Delta
We can use gplus(int, 1) to perform a global sum over int from each worker

56 SPMD Integral
e.g. m = 10, numlabs = 5:
n = 10/5 = 2 increments per lab
Delta = (pi/2)/5 = pi/10
ai = (labindex-1)*pi/10
bi = labindex*pi/10
Sum over increments for a worker: int = sum(cos(x)*dx);
Global sum over all workers: int = gplus(int, 1);
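Putting the pieces from slides 54 to 56 together, the parallel integral can be sketched as follows (a sketch assuming m is divisible by numlabs):

```matlab
m = 10; b = pi/2;
spmd
    n = m / numlabs;                 % increments per lab
    Delta = b / numlabs;             % integration length per lab
    dx = b / m;
    ai = (labindex - 1) * Delta;     % start of this lab's local range
    x = ai + dx/2 : dx : ai + Delta - dx/2;  % local midpoints
    int = sum(cos(x) * dx);          % local midpoint-rule sum
    int = gplus(int, 1);             % global sum, collected on lab 1
end
```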

57 Exercise 7: Numerical Integration integration.m is a serial numerical integration program Modify this code to run in parallel using the spmd command Submit your modified code to the MDCS using 2 workers 57

58 Distributed Arrays (diagram): with matlabpool(4), A = distributed([a b c d; e f g h; i j k l; m n o p]) splits the matrix by columns across the 4 MDCS workers: worker 1 holds [a; e; i; m], worker 2 holds [b; f; j; n], worker 3 holds [c; g; k; o], and worker 4 holds [d; h; l; p].

59 Distributed Arrays Allow large data sets to be distributed over multiple nodes Distributed by columns Can be constructed by partitioning a large array already in memory combining smaller arrays into one large array using distributed matrix constructor functions (distributed.rand(), distributed.zeros(), etc.) Operations on distributed arrays are automatically parallelized Arrays do not persist if the matlabpool is closed 59
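For example, operations on distributed arrays run in parallel with no further changes (the matrix size here is an arbitrary illustration):

```matlab
A = distributed.rand(4000);   % distributed 4000x4000 random matrices
B = distributed.rand(4000);
C = A * B;                    % the multiplication is automatically
                              % parallelized across the pool's workers
x = gather(C(1, 1));          % gather() copies a result back to the client
```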

60 Codistributed Arrays Codistributed arrays provide much more control over how arrays are distributed Can be distributed by any dimension Can distribute different amounts of data to different workers Codistributed arrays can be declared inside spmd sections 60
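A sketch of a codistributed array distributed by rows instead of the default columns (sizes are arbitrary):

```matlab
spmd
    % distribute along dimension 1 (rows) rather than the default columns
    codist = codistributor1d(1);
    A = codistributed.rand(1000, 1000, codist);
    myrows = getLocalPart(A);   % each worker's own block of rows
end
```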

61 Exercise 8: Matrix Multiplication matrixmul.m is a serial matrix multiplication Modify this file to use distributed arrays create distributed random arrays a, b time a matrix multiplication: tic; c = a*b; toc Submit the job for 1 worker and then for 4 workers What is the speedup (serial time / parallel time)? 61

62 Using GPUs with Matlab The Parallel Computing Toolbox can utilize CUDA-capable GPUs on the system (e.g. the K20s on Guillimin). GPU-enabled functions: fft, filter, toolbox functions, linear-algebra operations, and custom CUDA kernels (.cu or .ptx formats)

63 GPU Arrays Matlab can copy arrays to the GPU and perform matrix operations there to speed them up, e.g.:
x = rand(1000, 'single', 'gpuArray');
x2 = x.*x; % performed on the GPU

64 Exercise 9: GPU Job fourier.m is a serial fast Fourier transform (FFT) code Modify this file to perform the same calculation using normal and GPU Arrays Use tic and toc to time both operations and output the results Submit this job to a Guillimin GPU node Hint: Simply request in glmnpbs.m numberofnodes = 1; procspernode = 1; gpus = 1; What is the speedup from the GPU? 64

65 Summary Today we learned: How to configure a desktop installation of Matlab to submit jobs to a cluster computer using MDCS How to submit jobs to a cluster and monitor their output How to write parallel Matlab applications using parfor, spmd, and distributed arrays Many Matlab programs can be parallelized with a very small change Note that parallel programming is a huge topic and we have only scratched the surface! 65

66 Questions What questions do you have? 66

67 Using Xeon Phi with Matlab Matlab uses the Intel MKL math library Version >= 11.0 of MKL has automatic offloading to Xeon Phi Included in Matlab R2014a and newer On Guillimin: module add ifort_icc export MKL_MIC_MAX_MEMORY=16G export MKL_MIC_ENABLE=1 matlab & 67


More information

Partek Flow Installation Guide

Partek Flow Installation Guide Partek Flow Installation Guide Partek Flow is a web based application for genomic data analysis and visualization, which can be installed on a desktop computer, compute cluster or cloud. Users can access

More information

Backing Up CNG SAFE Version 6.0

Backing Up CNG SAFE Version 6.0 Backing Up CNG SAFE Version 6.0 The CNG-Server consists of 3 components. 1. The CNG Services (Server, Full Text Search and Workflow) 2. The data file repository 3. The SQL Server Databases The three services

More information

USB HSPA Modem. User Manual

USB HSPA Modem. User Manual USB HSPA Modem User Manual Congratulations on your purchase of this USB HSPA Modem. The readme file helps you surf the Internet, send and receive SMS, manage contacts and use many other functions with

More information

Overview of HPC Resources at Vanderbilt

Overview of HPC Resources at Vanderbilt Overview of HPC Resources at Vanderbilt Will French Senior Application Developer and Research Computing Liaison Advanced Computing Center for Research and Education June 10, 2015 2 Computing Resources

More information

Parallel Processing using the LOTUS cluster

Parallel Processing using the LOTUS cluster Parallel Processing using the LOTUS cluster Alison Pamment / Cristina del Cano Novales JASMIN/CEMS Workshop February 2015 Overview Parallelising data analysis LOTUS HPC Cluster Job submission on LOTUS

More information

bwgrid Treff MA/HD Sabine Richling, Heinz Kredel Universitätsrechenzentrum Heidelberg Rechenzentrum Universität Mannheim 29.

bwgrid Treff MA/HD Sabine Richling, Heinz Kredel Universitätsrechenzentrum Heidelberg Rechenzentrum Universität Mannheim 29. bwgrid Treff MA/HD Sabine Richling, Heinz Kredel Universitätsrechenzentrum Heidelberg Rechenzentrum Universität Mannheim 29. September 2010 Richling/Kredel (URZ/RUM) bwgrid Treff WS 2010/2011 1 / 25 Course

More information

Using the Yale HPC Clusters

Using the Yale HPC Clusters Using the Yale HPC Clusters Stephen Weston Robert Bjornson Yale Center for Research Computing Yale University Oct 2015 To get help Send an email to: hpc@yale.edu Read documentation at: http://research.computing.yale.edu/hpc-support

More information

SLURM: Resource Management and Job Scheduling Software. Advanced Computing Center for Research and Education www.accre.vanderbilt.

SLURM: Resource Management and Job Scheduling Software. Advanced Computing Center for Research and Education www.accre.vanderbilt. SLURM: Resource Management and Job Scheduling Software Advanced Computing Center for Research and Education www.accre.vanderbilt.edu Simple Linux Utility for Resource Management But it s also a job scheduler!

More information

Local Caching Servers (LCS): User Manual

Local Caching Servers (LCS): User Manual Local Caching Servers (LCS): User Manual Table of Contents Local Caching Servers... 1 Supported Browsers... 1 Getting Help... 1 System Requirements... 2 Macintosh... 2 Windows... 2 Linux... 2 Downloading

More information

Access Instructions for United Stationers ECDB (ecommerce Database) 2.0

Access Instructions for United Stationers ECDB (ecommerce Database) 2.0 Access Instructions for United Stationers ECDB (ecommerce Database) 2.0 Table of Contents General Information... 3 Overview... 3 General Information... 3 SFTP Clients... 3 Support... 3 WinSCP... 4 Overview...

More information

Download/Install IDENTD

Download/Install IDENTD Download/Install IDENTD IDENTD is the small software program that must be installed on each user s computer if multiple filters are to be used in ComSifter. The program may be installed and executed locally

More information

Bringing Big Data Modelling into the Hands of Domain Experts

Bringing Big Data Modelling into the Hands of Domain Experts Bringing Big Data Modelling into the Hands of Domain Experts David Willingham Senior Application Engineer MathWorks david.willingham@mathworks.com.au 2015 The MathWorks, Inc. 1 Data is the sword of the

More information

Online Backup Linux Client User Manual

Online Backup Linux Client User Manual Online Backup Linux Client User Manual Software version 4.0.x For Linux distributions August 2011 Version 1.0 Disclaimer This document is compiled with the greatest possible care. However, errors might

More information

Using NeSI HPC Resources. NeSI Computational Science Team (support@nesi.org.nz)

Using NeSI HPC Resources. NeSI Computational Science Team (support@nesi.org.nz) NeSI Computational Science Team (support@nesi.org.nz) Outline 1 About Us About NeSI Our Facilities 2 Using the Cluster Suitable Work What to expect Parallel speedup Data Getting to the Login Node 3 Submitting

More information

Export & Backup Guide

Export & Backup Guide Eport & Backup Guide Welcome to the WebOffice and WorkSpace eport and backup guide. This guide provides an overview and requirements of the tools available to etract data from your WebOffice or WorkSpace

More information

The RWTH Compute Cluster Environment

The RWTH Compute Cluster Environment The RWTH Compute Cluster Environment Tim Cramer 11.03.2013 Source: D. Both, Bull GmbH Rechen- und Kommunikationszentrum (RZ) How to login Frontends cluster.rz.rwth-aachen.de cluster-x.rz.rwth-aachen.de

More information

Cassandra Installation over Ubuntu 1. Installing VMware player:

Cassandra Installation over Ubuntu 1. Installing VMware player: Cassandra Installation over Ubuntu 1. Installing VMware player: Download VM Player using following Download Link: https://www.vmware.com/tryvmware/?p=player 2. Installing Ubuntu Go to the below link and

More information

RecoveryVault Express Client User Manual

RecoveryVault Express Client User Manual For Linux distributions Software version 4.1.7 Version 2.0 Disclaimer This document is compiled with the greatest possible care. However, errors might have been introduced caused by human mistakes or by

More information

Installation Manual v2.0.0

Installation Manual v2.0.0 Installation Manual v2.0.0 Contents ResponseLogic Install Guide v2.0.0 (Command Prompt Install)... 3 Requirements... 4 Installation Checklist:... 4 1. Download and Unzip files.... 4 2. Confirm you have

More information

Attix5 Pro Server Edition

Attix5 Pro Server Edition Attix5 Pro Server Edition V7.0.3 User Manual for Linux and Unix operating systems Your guide to protecting data with Attix5 Pro Server Edition. Copyright notice and proprietary information All rights reserved.

More information

Application Server Installation

Application Server Installation Application Server Installation Guide ARGUS Enterprise 11.0 11/25/2015 ARGUS Software An Altus Group Company Application Server Installation ARGUS Enterprise Version 11.0 11/25/2015 Published by: ARGUS

More information

Moving the TRITON Reporting Databases

Moving the TRITON Reporting Databases Moving the TRITON Reporting Databases Topic 50530 Web, Data, and Email Security Versions 7.7.x, 7.8.x Updated 06-Nov-2013 If you need to move your Microsoft SQL Server database to a new location (directory,

More information

SA-9600 Surface Area Software Manual

SA-9600 Surface Area Software Manual SA-9600 Surface Area Software Manual Version 4.0 Introduction The operation and data Presentation of the SA-9600 Surface Area analyzer is performed using a Microsoft Windows based software package. The

More information

User Manual. Onsight Management Suite Version 5.1. Another Innovation by Librestream

User Manual. Onsight Management Suite Version 5.1. Another Innovation by Librestream User Manual Onsight Management Suite Version 5.1 Another Innovation by Librestream Doc #: 400075-06 May 2012 Information in this document is subject to change without notice. Reproduction in any manner

More information

Job Scheduling with Moab Cluster Suite

Job Scheduling with Moab Cluster Suite Job Scheduling with Moab Cluster Suite IBM High Performance Computing February 2010 Y. Joanna Wong, Ph.D. yjw@us.ibm.com 2/22/2010 Workload Manager Torque Source: Adaptive Computing 2 Some terminology..

More information

VERSION 9.02 INSTALLATION GUIDE. www.pacifictimesheet.com

VERSION 9.02 INSTALLATION GUIDE. www.pacifictimesheet.com VERSION 9.02 INSTALLATION GUIDE www.pacifictimesheet.com PACIFIC TIMESHEET INSTALLATION GUIDE INTRODUCTION... 4 BUNDLED SOFTWARE... 4 LICENSE KEY... 4 SYSTEM REQUIREMENTS... 5 INSTALLING PACIFIC TIMESHEET

More information

Quick Tutorial for Portable Batch System (PBS)

Quick Tutorial for Portable Batch System (PBS) Quick Tutorial for Portable Batch System (PBS) The Portable Batch System (PBS) system is designed to manage the distribution of batch jobs and interactive sessions across the available nodes in the cluster.

More information

1. Product Information

1. Product Information ORIXCLOUD BACKUP CLIENT USER MANUAL LINUX 1. Product Information Product: Orixcloud Backup Client for Linux Version: 4.1.7 1.1 System Requirements Linux (RedHat, SuSE, Debian and Debian based systems such

More information

Online Backup Client User Manual Linux

Online Backup Client User Manual Linux Online Backup Client User Manual Linux 1. Product Information Product: Online Backup Client for Linux Version: 4.1.7 1.1 System Requirements Operating System Linux (RedHat, SuSE, Debian and Debian based

More information

Online Backup Client User Manual

Online Backup Client User Manual For Linux distributions Software version 4.1.7 Version 2.0 Disclaimer This document is compiled with the greatest possible care. However, errors might have been introduced caused by human mistakes or by

More information

NOC PS manual. Copyright Maxnet 2009 2015 All rights reserved. Page 1/45 NOC-PS Manuel EN version 1.3

NOC PS manual. Copyright Maxnet 2009 2015 All rights reserved. Page 1/45 NOC-PS Manuel EN version 1.3 NOC PS manual Copyright Maxnet 2009 2015 All rights reserved Page 1/45 Table of contents Installation...3 System requirements...3 Network setup...5 Installation under Vmware Vsphere...8 Installation under

More information

AWS Schema Conversion Tool. User Guide Version 1.0

AWS Schema Conversion Tool. User Guide Version 1.0 AWS Schema Conversion Tool User Guide AWS Schema Conversion Tool: User Guide Copyright 2016 Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may

More information

1.0. User Manual For HPC Cluster at GIKI. Volume. Ghulam Ishaq Khan Institute of Engineering Sciences & Technology

1.0. User Manual For HPC Cluster at GIKI. Volume. Ghulam Ishaq Khan Institute of Engineering Sciences & Technology Volume 1.0 FACULTY OF CUMPUTER SCIENCE & ENGINEERING Ghulam Ishaq Khan Institute of Engineering Sciences & Technology User Manual For HPC Cluster at GIKI Designed and prepared by Faculty of Computer Science

More information

CASHNet Secure File Transfer Instructions

CASHNet Secure File Transfer Instructions CASHNet Secure File Transfer Instructions Copyright 2009, 2010 Higher One Payments, Inc. CASHNet, CASHNet Business Office, CASHNet Commerce Center, CASHNet SMARTPAY and all related logos and designs are

More information

Cluster@WU User s Manual

Cluster@WU User s Manual Cluster@WU User s Manual Stefan Theußl Martin Pacala September 29, 2014 1 Introduction and scope At the WU Wirtschaftsuniversität Wien the Research Institute for Computational Methods (Forschungsinstitut

More information

Hadoop Installation MapReduce Examples Jake Karnes

Hadoop Installation MapReduce Examples Jake Karnes Big Data Management Hadoop Installation MapReduce Examples Jake Karnes These slides are based on materials / slides from Cloudera.com Amazon.com Prof. P. Zadrozny's Slides Prerequistes You must have an

More information

Using Symantec NetBackup with Symantec Security Information Manager 4.5

Using Symantec NetBackup with Symantec Security Information Manager 4.5 Using Symantec NetBackup with Symantec Security Information Manager 4.5 Using Symantec NetBackup with Symantec Security Information Manager Legal Notice Copyright 2007 Symantec Corporation. All rights

More information

PAN 2013 User Guide for the Virtual Machines

PAN 2013 User Guide for the Virtual Machines PAN 2013 User Guide for the Virtual Machines 1 Welcome Welcome to the PAN 2013 evaluation lab. A novelty this year is that PAN switches from the submission of runs (say, the output of a piece of software)

More information

SLURM Workload Manager

SLURM Workload Manager SLURM Workload Manager What is SLURM? SLURM (Simple Linux Utility for Resource Management) is the native scheduler software that runs on ASTI's HPC cluster. Free and open-source job scheduler for the Linux

More information

Connecting to the School of Computing Servers and Transferring Files

Connecting to the School of Computing Servers and Transferring Files Connecting to the School of Computing Servers and Transferring Files Connecting This document will provide instructions on how to connect to the School of Computing s server. Connect Using a Mac or Linux

More information

Manual for using Super Computing Resources

Manual for using Super Computing Resources Manual for using Super Computing Resources Super Computing Research and Education Centre at Research Centre for Modeling and Simulation National University of Science and Technology H-12 Campus, Islamabad

More information

Beyond Windows: Using the Linux Servers and the Grid

Beyond Windows: Using the Linux Servers and the Grid Beyond Windows: Using the Linux Servers and the Grid Topics Linux Overview How to Login & Remote Access Passwords Staying Up-To-Date Network Drives Server List The Grid Useful Commands Linux Overview Linux

More information

RingStor User Manual. Version 2.1 Last Update on September 17th, 2015. RingStor, Inc. 197 Route 18 South, Ste 3000 East Brunswick, NJ 08816.

RingStor User Manual. Version 2.1 Last Update on September 17th, 2015. RingStor, Inc. 197 Route 18 South, Ste 3000 East Brunswick, NJ 08816. RingStor User Manual Version 2.1 Last Update on September 17th, 2015 RingStor, Inc. 197 Route 18 South, Ste 3000 East Brunswick, NJ 08816 Page 1 Table of Contents 1 Overview... 5 1.1 RingStor Data Protection...

More information

SAM XFile. Trial Installation Guide Linux. Snell OD is in the process of being rebranded SAM XFile

SAM XFile. Trial Installation Guide Linux. Snell OD is in the process of being rebranded SAM XFile SAM XFile Trial Installation Guide Linux Snell OD is in the process of being rebranded SAM XFile Version History Table 1: Version Table Date Version Released by Reason for Change 10/07/2014 1.0 Andy Gingell

More information

CycleServer Grid Engine Support Install Guide. version 1.25

CycleServer Grid Engine Support Install Guide. version 1.25 CycleServer Grid Engine Support Install Guide version 1.25 Contents CycleServer Grid Engine Guide 1 Administration 1 Requirements 1 Installation 1 Monitoring Additional OGS/SGE/etc Clusters 3 Monitoring

More information

Hadoop Basics with InfoSphere BigInsights

Hadoop Basics with InfoSphere BigInsights An IBM Proof of Technology Hadoop Basics with InfoSphere BigInsights Unit 4: Hadoop Administration An IBM Proof of Technology Catalog Number Copyright IBM Corporation, 2013 US Government Users Restricted

More information

AlienVault Unified Security Management (USM) 4.x-5.x. Deploying HIDS Agents to Linux Hosts

AlienVault Unified Security Management (USM) 4.x-5.x. Deploying HIDS Agents to Linux Hosts AlienVault Unified Security Management (USM) 4.x-5.x Deploying HIDS Agents to Linux Hosts USM 4.x-5.x Deploying HIDS Agents to Linux Hosts, rev. 2 Copyright 2015 AlienVault, Inc. All rights reserved. AlienVault,

More information

Introduction to HPC Workshop. Center for e-research (eresearch@nesi.org.nz)

Introduction to HPC Workshop. Center for e-research (eresearch@nesi.org.nz) Center for e-research (eresearch@nesi.org.nz) Outline 1 About Us About CER and NeSI The CS Team Our Facilities 2 Key Concepts What is a Cluster Parallel Programming Shared Memory Distributed Memory 3 Using

More information

Imaging Computing Server User Guide

Imaging Computing Server User Guide Imaging Computing Server User Guide PerkinElmer, Viscount Centre II, University of Warwick Science Park, Millburn Hill Road, Coventry, CV4 7HS T +44 (0) 24 7669 2229 F +44 (0) 24 7669 0091 E cellularimaging@perkinelmer.com

More information

PAN Virtual Machine User Guide for Software Submissions

PAN Virtual Machine User Guide for Software Submissions PAN Virtual Machine User Guide for Software Submissions 1 Welcome Welcome to the PAN evaluation lab. PAN ist the first lab to switch from the submission of runs (say, the output of a piece of software)

More information

Automation Engine 14. Troubleshooting

Automation Engine 14. Troubleshooting 4 Troubleshooting 2-205 Contents. Troubleshooting the Server... 3. Checking the Databases... 3.2 Checking the Containers...4.3 Checking Disks...4.4.5.6.7 Checking the Network...5 Checking System Health...

More information

Extending Remote Desktop for Large Installations. Distributed Package Installs

Extending Remote Desktop for Large Installations. Distributed Package Installs Extending Remote Desktop for Large Installations This article describes four ways Remote Desktop can be extended for large installations. The four ways are: Distributed Package Installs, List Sharing,

More information

Grid 101. Grid 101. Josh Hegie. grid@unr.edu http://hpc.unr.edu

Grid 101. Grid 101. Josh Hegie. grid@unr.edu http://hpc.unr.edu Grid 101 Josh Hegie grid@unr.edu http://hpc.unr.edu Accessing the Grid Outline 1 Accessing the Grid 2 Working on the Grid 3 Submitting Jobs with SGE 4 Compiling 5 MPI 6 Questions? Accessing the Grid Logging

More information

StreamServe Persuasion SP4

StreamServe Persuasion SP4 StreamServe Persuasion SP4 Installation Guide Rev B StreamServe Persuasion SP4 Installation Guide Rev B 2001-2009 STREAMSERVE, INC. ALL RIGHTS RESERVED United States patent #7,127,520 No part of this document

More information

Setting up the Oracle Warehouse Builder Project. Topics. Overview. Purpose

Setting up the Oracle Warehouse Builder Project. Topics. Overview. Purpose Setting up the Oracle Warehouse Builder Project Purpose In this tutorial, you setup and configure the project environment for Oracle Warehouse Builder 10g Release 2. You create a Warehouse Builder repository

More information

What is new in Switch 12

What is new in Switch 12 What is new in Switch 12 New features and functionality: Remote Designer From this version onwards, you are no longer obliged to use the Switch Designer on your Switch Server. Now that we implemented the

More information

Advanced PBS Workflow Example Bill Brouwer 05/01/12 Research Computing and Cyberinfrastructure Unit, PSU wjb19@psu.edu

Advanced PBS Workflow Example Bill Brouwer 05/01/12 Research Computing and Cyberinfrastructure Unit, PSU wjb19@psu.edu Advanced PBS Workflow Example Bill Brouwer 050112 Research Computing and Cyberinfrastructure Unit, PSU wjb19@psu.edu 0.0 An elementary workflow All jobs consuming significant cycles need to be submitted

More information