Cluster@WU User's Manual

Stefan Theußl
Martin Pacala

September 29

1 Introduction and scope

At the WU Wirtschaftsuniversität Wien, the Research Institute for Computational Methods (Forschungsinstitut für rechenintensive Methoden, or FIRM for short) hosts a cluster of Intel workstations known as cluster@wu.

The scope of this manual is limited to a general introduction to cluster@wu and its usage. It assumes basic Linux knowledge on the part of the user. For a more comprehensive guide, including detailed instructions for Windows users and users new to Unix as well as the optional arguments to the available commands, please visit the cluster website. Suggestions and improvements to this manual (as well as the website manual) can be e-mailed to statmath-it@wu.ac.at.

1.1 cluster@wu

With a total of 528 64-bit computation cores and a total of one terabyte of RAM, cluster@wu is well equipped to tackle challenging problems from various research areas. The high performance computing cluster consists of four parts: the cluster running the applications, which itself consists of 44 nodes, a login server, a file server and storage from a storage area network (SAN). The 44 nodes, each offering 12 cores for a total of 528 cores capable of processing jobs, are combined in the queue node.q. Table 1 provides a brief overview of the specs of each individual node.

node.q    44 nodes    2 Intel X5670 (2.93 GHz)    24 GB RAM

Table 1: cluster@wu specification

The file server (clusterfs.wu.ac.at) hosts the user data, the application data and the scheduling system, Sun Grid Engine. This grid engine is responsible for job administration and supports the submission of serial as well as parallel tasks.

The login server (cluster.wu.ac.at) is the main entry point for application developers and cluster users. This server handles user authentication and the execution of programs, and provides cluster users with a platform for managing their computationally intensive jobs.

2 Cluster Access

To get access to cluster@wu a local Unix account is needed. To acquire such an account, send an e-mail to firm-sys@wu.ac.at. You will be notified by e-mail once your account has been created.

2.1 Login

To log in to the cluster, type

ssh username@cluster.wu.ac.at

into your Unix shell (a number of SSH clients, e.g. PuTTY, are available for Windows users). After authentication with your username and password, a shell on the login server becomes available. The user is provided with programs for editing and compiling as well as tools for managing jobs for the grid engine.

The login server solely serves as an access point to the main cluster and should therefore only be used for editing programs (e.g. installing R packages into your personal library), compiling small applications and managing jobs submitted to the grid engine.

2.2 Changing the password

To change your password, simply use the command passwd after logging in to cluster@wu and enter your current and new password when prompted.
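
For reference, a typical first session might look as follows (a minimal sketch; the username and the shown commands are the ones described above, nothing cluster-specific is assumed beyond that):

# on your local machine: open a shell on the login server
ssh username@cluster.wu.ac.at

# on cluster.wu.ac.at: change the initial password
passwd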

3 Using the Cluster

This section presents a summary of the capabilities of the Sun Grid Engine (SGE) as well as an overview of how to use this software. Sun Grid Engine is an open source cluster resource management and scheduling software. On cluster@wu, version 6.0 of the Grid Engine manages the remote execution of cluster jobs. The open source version can be obtained from the Grid Engine project website; the commercial version, Sun N1 Grid Engine, can be found on the Sun website.

3.1 Definitions

Before going into further detail, the user should be aware of certain frequently used terms:

Nodes  In SGE terminology, a node is sometimes referred to as one core in the cluster. In this manual, however, we refer to a node as one rack unit: each of the cluster@wu nodes has 12 cores, each of which can process one job at a time. This difference in terminology can sometimes cause confusion between different texts dealing with clusters.

Jobs  are user requests for resources available in a grid. These are evaluated by the SGE and distributed to the nodes for processing.

Tasks  are smaller parts of a job that can be processed separately on different nodes or cores. A single job can consist of many tasks (even thousands). Each of these tasks can perform similar or completely different calculations, depending on the arguments passed to the SGE.

Job Arguments  Each job can be submitted with extra parameters which affect how the job gets processed. These arguments can be specified in the job submission file.

3.2 Fair Use

In order to provide maximum flexibility, some aspects of the grid engine are set up with very few restrictions. For example, there is no limit on how many jobs a user can start or have in the queue. However, this also means that users need to write and submit their jobs in a way that does not adversely affect the rights of other users to run jobs on the cluster.

For example, if you wish to start a job which contains hundreds or even a few thousand tasks and which will occupy a significant amount of resources for extended periods of time, submit the job with a reduced priority (see the section on Grid Engine arguments below) so that other users' jobs can get processed whenever one of your tasks completes.

It is not allowed to start time- or resource-intensive programs on the login server (as this has side effects for all users logged in). If such tasks are started anyway, they may be terminated by the administrators without notice.

3.3 How the Grid Engine Operates

In general, the grid engine has to match the available resources to the requests of the grid users. The grid engine is therefore responsible for

- accepting jobs from the outside world,
- delaying a job until it can be run,
- sending a job from the holding area to an execution device (node),
- managing running jobs, and
- logging of jobs.

The user does not need to care about questions like "On which node should I run my tasks?" or "How do I get the results of my computation back from a node to my home directory?". All of this is handled by the grid engine.

3.4 Submission of a Job

A simple example shows how one can submit a job on the WU cluster infrastructure:

1. Log in to the cluster with your user account (when using a terminal, use the following command):

ssh yourusername@cluster.wu.ac.at

2. Create a text file on the cluster (we will call the file myjob) with the following content:

#$ -N firstjob
R-i --vanilla < myrprogram.r

3. Submit the job to the grid engine with the following command:

qsub myjob

This will queue the job and run it as soon as a node is free.

3.5 Output

For every submitted and processed job the system creates two files. Each filename consists of the name of the job, a marker distinguishing whether it contains the output or the errors ("o" and "e" respectively) and the job ID. The submitted myjob from the previous example would hence result in the following files being created:

firstjob.oXXXXX : starts with a prologue containing some meta information about the job, then continues with the actual output (standard output only) and ends with an epilogue (containing the runtime etc.)

firstjob.eXXXXX : contains any errors encountered

XXXXX refers to the job ID. Note that the output is cached while the job is running and will be written to the two files with a delay (or immediately once the job finishes).

3.6 Deletion of a Job

To delete a job, use the qdel command followed by the job ID.

3.7 Monitoring Job/Cluster Status

Statistics of the jobs running on cluster@wu can be obtained by calling qstat. An overview of all jobs/nodes and their utilization can be obtained using the sjs and sns commands.
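
To make the workflow concrete, a typical submit/monitor/delete session might look as follows (a minimal sketch; the job ID 12345 is a made-up placeholder):

qsub myjob     # prints a confirmation such as "Your job 12345 ("firstjob") has been submitted"
qstat          # shows the job while it is queued or running
sjs            # overview of all jobs on the cluster
qdel 12345     # removes the job from the queue, or aborts it if it is already running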

3.8 Summary of Grid Engine Arguments

A job file typically begins with commands to the grid engine. These commands start with #$, followed by the argument as described below and then the desired value. One or more arguments can be passed to the grid engine:

-N  defines the job name that will be displayed in the queue and used for the output and error files

-m bea  tells the system to send an e-mail when the job starts (b), ends (e) or is aborted (a)

-M  followed by the e-mail address to notify

-q  selects one of the currently two node queues (either bignode.q or node.q); the default is node.q

-pe [type] [n]  starts up a parallel environment [type], reserving [n] cores

3.9 Job Best Practices & What to Avoid

While it is possible to run jobs on nodes which then spawn further processes (see the Fair Use section above for an explanation), please refrain from doing so unless you have reserved the appropriate number of cores on the node you will be using (for instance by submitting the job in a parallel environment, see the example below). Otherwise the grid engine might try to allocate further jobs to a particular node even though it is already running at maximum capacity.

It might also be tempting to have your jobs write data back into your home directory (which is remotely mounted on each node when needed) as the job gets processed. This is not an issue if done in a limited fashion, but if done excessively with hundreds of simultaneous jobs it can cause the file server to become unresponsive. This then results in jobs being unable to pass the data they need to the file server, as well as users being unable to access their own home directories upon logging in to the cluster (a typical symptom would be a user who normally uses SSH key-based authentication being asked to input their password, because the unresponsive file server cannot serve their public key).

Instead, make use of the local storage in the /tmp/ directory of each node (approx. GB) as well as the /scratch/ directory, which is mounted on each node and allows for cross-node access to data, and have your scripts write into these directories instead of your home directory. Once you have larger chunks of data ready, you can have them copied to your home directory.
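
As a rough sketch of this staging pattern (assuming the grid engine exports the usual JOB_ID variable to the job environment; the file names and paths are placeholders, not part of the cluster setup), a job script could look like this:

#$ -N staged-job

# work in the node-local /tmp/ directory instead of the remotely mounted home
WORKDIR=/tmp/$USER-$JOB_ID
mkdir -p $WORKDIR
cd $WORKDIR

# run the actual computation, writing its output locally
R-i --vanilla < ~/myrprogram.r > result.txt

# copy the finished chunk back to the home directory once, at the end
cp result.txt ~/
rm -rf $WORKDIR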

3.10 Troubleshooting & Further Details

Troubleshooting issues and further details are outside the scope of this manual. Please refer to the cluster website for more information. If that is also insufficient to resolve your issue, you are welcome to contact the cluster administrators.

4 Job Examples

4.1 A Simple Job

What follows is a simple job without any parameters to the SGE. The shell command date is run; then, after a pause of 30 seconds, the command is run again.

# print date and time
date
# sleep for 30 seconds
sleep 30
# print date and time again
date

4.2 Compilation of Applications on the Cluster

The following job starts a remote compilation on cluster@wu. The arguments to the grid engine define the e-mail address of the user to whom a mail should be sent. The flag -m e causes the e-mail to be sent at the end of the job.

#$ -N compile-application
#$ -M user@example.org
#$ -m e
#$ -q node.q

cd /path/to/src
echo "#### clean ####"
make clean
echo "#### configure ####"
./configure CC=icc CXX=icpc FC=f77 --prefix=/path/to/local/lib/
echo "#### make ####"
make all
echo "#### install ####"
make install
echo "#### finished ####"

4.3 Same Job with Different Parameters

This is a commonly used way to (pseudo-)parallelize tasks: for one job, different tasks are executed. The key is an environment variable called SGE_TASK_ID. For a range of task numbers provided by the -t argument, a task is started running the given job, having access to a unique environment variable which identifies the task. To illustrate this way of job creation, see the following job:

#$ -N R_alternatives
#$ -t 1:10

R-i --vanilla <<-EOF
run <- as.integer(Sys.getenv("SGE_TASK_ID"))
param <- expand.grid(mu = c(0.01, 0.025, 0.05, 0.075, 0.1),
                     sd = c(0.04, 0.1))
param
vec <- rnorm(50, param[[run, 1]], param[[run, 2]])
mean(vec)
sd(vec)
EOF

For each task ID, a vector of 50 normally distributed pseudo-random numbers is generated. The parameters of the normal distribution are chosen using the SGE_TASK_ID environment variable.
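
Submitting this array job works exactly like submitting a single job; the grid engine expands the task range itself. A hypothetical session (the job ID, the file name and the exact output-file naming are placeholders/assumptions, not verified against cluster@wu):

qsub r_alternatives.job     # queues tasks 1 through 10 as one array job
qstat                       # each pending or running task is listed with its own task ID
ls R_alternatives.o*        # one output file per task, e.g. R_alternatives.o12345.1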

4.4 OpenMPI/Parallel Job

The grid engine helps the user with setting up parallel environments. The -pe argument, followed by the desired parallel environment (e.g. orte, pvm), informs the grid engine to start the specified environment.

#$ -N RMPI
#$ -pe orte 20
#$ -q node.q

# Job for using the MPI implementation LAM on 20 cores
mpirun -np 20 /path/to/lam/executable

4.5 PVM Job

#$ -N pvm-example
#$ -pe pvm 5
#$ -q node.q

/path/to/pvm/executable
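
In either parallel environment, the number of reserved slots is also exported by the grid engine, so it does not have to be hard-coded a second time. A minimal sketch, assuming the standard SGE variable NSLOTS is set on cluster@wu as usual:

#$ -N RMPI-flexible
#$ -pe orte 20
#$ -q node.q

# NSLOTS is set by the grid engine to the number of cores reserved via -pe
mpirun -np $NSLOTS /path/to/mpi/executable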

5 Available Software

In this section a summary of the available programs is given. The operating system is Debian GNU/Linux.

R
  R-i        R compiled with the Intel compiler and linked against libgoto
  R-g        R compiled with the default settings

Compilers
  gcc        GNU C Compiler, Stallman and the GCC Developer Community (2007)
  g++        GNU C++ Compiler
  gfortran   GNU Fortran Compiler
  icc        Intel C Compiler, Intel Corporation (2007a)
  icpc       Intel C++ Compiler, Intel Corporation (2007a)
  ifort      Intel Fortran Compiler, Intel Corporation (2007b)

Editors
  emacs, vi/vim, nano, joe

Scientific
  octave

HPC
  LAM/MPI    for running MPI programs
  PVM        Parallel Virtual Machine, version 3.4

Table 2: Available software

A .bashrc Modifications

In this appendix, parts of the .bashrc are explained which enable specific functionality on the cluster. Keep in mind that jobs need to be modified to specifically include your .bashrc, by adding #!/bin/sh to the beginning of the file which contains your job information and instructions.

A.1 Enable OpenMPI

To get the MPI wrappers and libraries, add the following to your .bashrc:

export LD_LIBRARY_PATH=/opt/libs/openmpi INTEL /lib:$LD_LIBRARY_PATH
export PATH=$PATH:/opt/libs/openmpi INTEL /bin
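
To check that the modification took effect, you can log in again (or source the .bashrc) and verify that the wrappers are found. A small sketch, assuming the OpenMPI installation under /opt/libs provides the usual mpicc/mpirun wrappers:

source ~/.bashrc
which mpicc mpirun     # both should now resolve to the OpenMPI directory under /opt/libs
mpirun --version       # prints the OpenMPI version that parallel jobs will use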

A.2 Enable PVM

To enable PVM, add the following to your .bashrc:

# PVM
# you may wish to use this for your own programs (edit the last
# part to point to a different directory, f.e. ~/bin/_$PVM_ARCH).
#
if [ -z $PVM_ROOT ]; then
  if [ -d /usr/lib/pvm3 ]; then
    export PVM_ROOT=/usr/lib/pvm3
  else
    echo "Warning - PVM_ROOT not defined"
    echo "To use PVM, define PVM_ROOT and rerun your .bashrc"
  fi
fi
if [ -n $PVM_ROOT ]; then
  export PVM_ARCH=`$PVM_ROOT/lib/pvmgetarch`
  #
  # uncomment one of the following lines if you want the PVM commands
  # directory to be added to your shell path.
  #
  # export PATH=$PATH:$PVM_ROOT/lib            # generic
  export PATH=$PATH:$PVM_ROOT/lib/$PVM_ARCH    # arch-specific
  #
  # uncomment the following line if you want the PVM executable directory
  # to be added to your shell path.
  #
  export PATH=$PATH:$PVM_ROOT/bin/$PVM_ARCH
fi

A.3 Local R Library

Only the administrator has write access to the site library, and not all packages that users may require are pre-installed. For this reason, users should create their own package library in their home directory. Since the home directories are exported to all nodes during the execution of jobs, the personal package library will be available there as well. To do this, create a directory in your home folder and add the following to your .bashrc file:

# R package directory
export R_LIBS=~/path/to/R/lib

If, upon your next login to the cluster, you start R (see the section Available Software above) and your newly created folder is displayed as the primary library, then the .bashrc modification worked.
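
As a concrete sketch (the directory name ~/lib/R is just an example; any path matching your R_LIBS setting works), setting up and checking the personal library could look like this:

mkdir -p ~/lib/R
echo 'export R_LIBS=~/lib/R' >> ~/.bashrc

# verify on the next login (or after "source ~/.bashrc"):
echo '.libPaths()' | R-i --vanilla
# the personal library ~/lib/R should be listed first; install.packages()
# called from an interactive R session will then install into it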

References

Richard Stallman and the GCC Developer Community. Using the GNU Compiler Collection. The Free Software Foundation, 2007.

Intel Corporation. Intel C++ Compiler Documentation. Intel Corporation, 2007a.

Intel Corporation. Intel Fortran Compiler Documentation. Intel Corporation, 2007b.
