Cluster@WU User's Manual




Stefan Theußl, Martin Pacala
September 29, 2014

1 Introduction and Scope

At the WU Wirtschaftsuniversität Wien, the Research Institute for Computational Methods (Forschungsinstitut für rechenintensive Methoden, or FIRM for short) hosts a cluster of Intel workstations known as cluster@wu; see http://www.wu.ac.at/firm.

The scope of this manual is limited to a general introduction to cluster@wu and its usage, and it assumes basic Linux knowledge on the part of the user. For a more comprehensive guide, including detailed instructions for Windows users and users new to Unix as well as the optional arguments to the available commands, please visit http://statmath.wu.ac.at/cluster. Suggestions and improvements to this manual (as well as to the website manual) can be emailed to statmath-it@wu.ac.at.

1.1 cluster@wu

With a total of 528 64-bit computation cores and one terabyte of RAM, cluster@wu is well equipped to tackle challenging problems from various research areas. The high performance computing cluster consists of four parts: the cluster running the applications, which itself consists of 44 nodes; a login server; a file server; and storage from a storage area network (SAN). The 44 nodes, each offering 12 cores for a total of 528 cores capable of processing jobs, are combined in the queue node.q. Table 1 gives a brief overview of the specification of each individual node.

    Queue    Nodes     CPUs                                  RAM
    node.q   44 nodes  2 x Intel X5670 (6 cores) @ 2.93 GHz  24 GB

    Table 1: cluster@wu specification

The file server (clusterfs.wu.ac.at) hosts the user data, the application data, and the scheduling system, Sun Grid Engine. The grid engine is responsible for job administration and supports the submission of serial as well as parallel tasks. The login server (cluster.wu.ac.at) is the main entry point for application developers and cluster users. It handles user authentication and the execution of programs, and provides cluster users with a platform for managing their computationally intensive jobs.

2 Cluster Access

To get access to cluster@wu, a local Unix account is needed. To acquire such an account, send an email to firm-sys@wu.ac.at. You will be notified by email once your account has been created.

2.1 Login

To log in to the cluster, type

    ssh username@cluster.wu.ac.at

into your Unix shell (a number of SSH clients, e.g. PuTTY, are available for Windows users). After authentication with your username and password, a shell on the login server becomes available, providing programs for editing and compiling as well as tools for managing jobs for the grid engine. The login server solely serves as an access point to the main cluster; it should therefore only be used for editing programs (e.g. installing R packages into your personal library), compiling small applications, and managing jobs submitted to the grid engine.

2.2 Changing the Password

To change your password, simply run the command passwd after logging in to cluster@wu and enter your new password at the prompt.

3 Using the Cluster

This section summarizes the capabilities of the Sun Grid Engine (SGE) and gives an overview of how to use this software. Sun Grid Engine is open source cluster resource management and scheduling software. On cluster@wu, version 6.0 of the Grid Engine manages the remote execution of cluster jobs. It can be obtained from http://gridengine.sunsource.net/; the commercial version, Sun N1 Grid Engine, can be found at http://www.sun.com/software/gridware/.

3.1 Definitions

Before going into further detail, the user should be aware of some frequently used terms:

Nodes  In SGE terminology a "node" refers to a single core in the cluster, whereas in this manual a node is one rack unit. Each of the cluster@wu nodes has 12 cores, each of which can process one job at a time. This difference can cause confusion between different texts dealing with clusters.

Jobs  are user requests for resources available in a grid. They are evaluated by the SGE and distributed to the nodes for processing.

Tasks  are smaller parts of a job that can be processed separately on different nodes or cores. A single job can consist of many tasks (even thousands). Depending on the arguments passed to the SGE, each of these tasks can perform similar or completely different calculations.

Job arguments  Each job can be submitted with extra parameters which affect how the job gets processed. These arguments can be specified in the job submission file.

3.2 Fair Use

In order to provide maximum flexibility, some aspects of the grid engine are set up with very few restrictions. For example, there is no limit on how many jobs a user can start or have in the queue. However, this also means that users need to write and submit their jobs in a way that does not adversely affect the right of other users to run jobs on the cluster.

For example, if you wish to start a job containing hundreds or even a few thousand tasks that will occupy a significant amount of resources for extended periods of time, submit the job with a reduced priority (see the section on arguments below) so that other users' jobs can be processed whenever one of your tasks completes.

It is not allowed to start time- or resource-intensive programs on the login server, as this affects all logged-in users. If such tasks are started anyway, they may be terminated by the administrators without notice.

3.3 How the Grid Engine Operates

In general, the grid engine has to match the available resources to the requests of the grid users. The grid engine is therefore responsible for:

- accepting jobs from the outside world
- delaying a job until it can be run
- sending a job from the holding area to an execution device (node)
- managing running jobs
- logging of jobs

The user does not need to care about questions like "On which node should I run my tasks?" or "How do I get the results of my computation back from a node to my home directory?" All of this is handled by the grid engine.

3.4 Submission of a Job

A simple example shows how to submit a job on the WU cluster infrastructure:

1. Log in to the cluster with your user account (from a terminal, use the following command):

       ssh yourusername@cluster.wu.ac.at

2. Create a text file on the cluster (we will call the file myjob) with the following content:

       #$ -N firstjob
       R-i --vanilla < myrprogram.R

3. Submit the job to the grid engine with the following command:

       qsub myjob

This queues the job, which then runs as soon as a node is free.

3.5 Output

For every submitted and processed job the system creates two files. Each filename consists of the name of the job, a marker distinguishing output from errors ("o" and "e", respectively), and the job ID. The myjob file submitted in the previous example would hence result in the following files being created:

firstjob.oXXXXX : starts with a prologue containing some meta information about the job, continues with the actual output (standard output only), and ends with an epilogue (containing the runtime etc.)

firstjob.eXXXXX : contains any errors encountered

XXXXX refers to the job ID. Note that the output is cached while the job is running and is written to the two files with a delay (or immediately once the job finishes).

3.6 Deletion of a Job

To delete a job, use the qdel command.

3.7 Monitoring Job/Cluster Status

Statistics of the jobs running on cluster@wu can be obtained by calling qstat. An overview of all jobs/nodes and their utilization can be obtained using the sjs and sns commands.
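As a sketch of a typical monitoring session (the job ID 12345 is a placeholder for an ID reported by qsub or qstat), the qstat and qdel commands might be used as follows:

```shell
# list your own jobs and their states (qw = waiting, r = running)
qstat -u "$USER"

# remove a single job by its ID, or all of your own jobs at once
qdel 12345
qdel -u "$USER"
```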

3.8 Summary of Grid Engine Arguments

A job typically begins with commands to the grid engine. These commands start with #$, followed by one of the arguments described below and the desired value. One or more arguments can be passed to the grid engine:

-N  defines the job name that will be displayed in the queue and used for the output and error files

-m bea  tells the system to send an email when the job starts (b), ends (e), or is aborted (a)

-M  followed by the email address to notify

-q  selects one of the currently two node queues (either bignode.q or node.q); the default is node.q

-pe [type] [n]  starts a parallel environment [type], reserving [n] cores

3.9 Job Best Practices & What to Avoid

While it is possible to run jobs on nodes which then spawn further processes (see the Fair Use section above for an explanation), please refrain from doing so unless you have reserved the appropriate number of cores on the node you will be using (for instance by submitting the job in a parallel environment; see the example below). Otherwise the grid engine might allocate further jobs to a node even though it is already running at maximum capacity.

It might also be tempting to have your jobs write data back into your home directory (which is remotely mounted on each node when needed) as the job gets processed. This is not an issue if done in a limited fashion, but done excessively with hundreds of simultaneous jobs it can cause the file server to become unresponsive. This then results in jobs being unable to pass their data to the file server, and in users being unable to access their own home directories upon logging in to the cluster (a typical symptom is a user who normally uses SSH key based authentication being asked for their password, because the unresponsive file server prevents their public key from being accessed).
Instead, make use of the local storage in the /tmp/ directory of each node (approx. 10-15 GB) as well as the /scratch/ directory, which is mounted on each node and allows cross-node access to data: have your scripts write into these directories instead of your home directory. Once you have larger chunks of data ready, you can have them copied to your home directory.
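A minimal job script following this advice might look as follows (a sketch only: the paths are examples, and date merely stands in for the real computation):

```shell
#$ -N scratch-example
#$ -q node.q

# create a private working directory on the node-local disk
WORKDIR=$(mktemp -d /tmp/myjob.XXXXXX)

# run the computation, writing intermediate output locally
# (date is only a stand-in for the actual program)
date > "$WORKDIR/result.txt"

# copy the finished result back home in a single transfer, then clean up
mkdir -p "$HOME/results"
cp "$WORKDIR/result.txt" "$HOME/results/"
rm -rf "$WORKDIR"
```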

3.10 Troubleshooting & Further Details

Troubleshooting and further details are outside the scope of this manual. Please refer to the website at http://statmath.wu.ac.at/cluster for more information. If that is insufficient to resolve your issue, you are welcome to contact the cluster administrators at statmath-it@wu.ac.at.

4 Job Examples

4.1 A Simple Job

What follows is a simple job without any parameters for the SGE. The shell command date is run; then, after a pause of 30 seconds, the command is run again.

    # print date and time
    date
    # sleep for 30 seconds
    sleep 30
    # print date and time again
    date

4.2 Compilation of Applications on the Cluster

The following job starts a remote compilation on cluster@wu. The arguments to the grid engine define the email address of the user to whom a mail should be sent; the flag -m e causes the email to be sent at the end of the job.

    #$ -N compile-application
    #$ -M user1@example.org
    #$ -m e
    #$ -q node.q
    cd /path/to/src
    echo "#### clean ####"
    make clean
    echo "#### configure ####"

    ./configure CC=icc CXX=icpc FC=f77 --prefix=/path/to/local/lib/
    echo "#### make ####"
    make all
    echo "#### install ####"
    make install
    echo "#### finished ####"

4.3 Same Job with Different Parameters

This is a commonly used way to (pseudo-)parallelize tasks: within one job, different tasks are executed. The key is an environment variable called SGE_TASK_ID. For the range of task numbers provided by the -t argument, one task per number is started running the given job, with access to this unique environment variable identifying the task. To illustrate this kind of job, consider the following example:

    #$ -N R_alternatives
    #$ -t 1:10
    R-i --vanilla <<-EOF
    run <- as.integer(Sys.getenv("SGE_TASK_ID"))
    param <- expand.grid(mu = c(0.01, 0.025, 0.05, 0.075, 0.1),
                         sd = c(0.04, 0.1))
    param
    vec <- rnorm(50, param[[run, 1]], param[[run, 2]])
    mean(vec)
    sd(vec)
    EOF

For each task ID, a vector of 50 normally distributed pseudo-random numbers is generated. The parameters of the normal distribution are chosen using the SGE_TASK_ID environment variable.

4.4 OpenMPI/Parallel Job

The grid engine helps the user set up parallel environments. The -pe argument, followed by the desired parallel environment (e.g., orte, pvm), tells the grid engine to start the specified environment.

    #$ -N RMPI
    #$ -pe orte 20
    #$ -q node.q
    # job using the MPI implementation LAM on 20 cores
    mpirun -np 20 /path/to/lam/executable

4.5 PVM Job

    #$ -N pvm-example
    #$ -pe pvm 5
    #$ -q node.q
    /path/to/pvm/executable

5 Available Software

This section gives a summary of the available programs. The operating system is Debian GNU Linux (http://www.debian.org).

R
    R-i       R compiled with the Intel compiler and linked against libgoto
    R-g       R compiled with the default settings

Compilers
    gcc       GNU C compiler, Stallman and the GCC Developer Community (2007)
    g++       GNU C++ compiler
    gfortran  GNU Fortran compiler
    icc       Intel C compiler, Intel Corporation (2007a)
    icpc      Intel C++ compiler, Intel Corporation (2007a)
    ifort     Intel Fortran compiler, Intel Corporation (2007b)

Editors
    emacs, vi/vim, nano, joe

Scientific
    octave

HPC
    LAM/MPI   version 7.1.3, for running MPI programs
    PVM       Parallel Virtual Machine, version 3.4

    Table 2: Available software

A .bashrc Modifications

This appendix explains the parts of .bashrc which enable specific functionality on the cluster. Keep in mind that jobs need to be modified to specifically include your .bashrc by adding #!/bin/sh to the beginning of the file which contains your job information and instructions.

A.1 Enable OpenMPI

To get the MPI wrappers and libraries, add the following to your .bashrc:

    export LD_LIBRARY_PATH=/opt/libs/openmpi-1.4.3-INTEL-12.0-64/lib:$LD_LIBRARY_PATH
    export PATH=$PATH:/opt/libs/openmpi-1.4.3-INTEL-12.0-64/bin

A.2 Enable PVM

To enable PVM, add the following to your .bashrc:

    # PVM
    # you may wish to use this for your own programs (edit the last
    # part to point to a different directory, e.g. ~/bin/_$PVM_ARCH).
    #
    if [ -z "$PVM_ROOT" ]; then
        if [ -d /usr/lib/pvm3 ]; then
            export PVM_ROOT=/usr/lib/pvm3
        else
            echo "Warning - PVM_ROOT not defined"
            echo "To use PVM, define PVM_ROOT and rerun your .bashrc"
        fi
    fi
    if [ -n "$PVM_ROOT" ]; then
        export PVM_ARCH=`$PVM_ROOT/lib/pvmgetarch`
        #
        # uncomment one of the following lines if you want the PVM commands
        # directory to be added to your shell path.
        #
        # export PATH=$PATH:$PVM_ROOT/lib              # generic
        export PATH=$PATH:$PVM_ROOT/lib/$PVM_ARCH      # arch-specific
        #
        # uncomment the following line if you want the PVM executable directory
        # to be added to your shell path.
        #
        export PATH=$PATH:$PVM_ROOT/bin/$PVM_ARCH
    fi

A.3 Local R Library

Only the administrator has write access to the site library, and not all packages that users may require are pre-installed. For this reason, users should create their own package library in their home directory. Since the home directories are exported to all nodes during the execution of jobs, the personal package library will also be available there. To do this, create a directory in your home folder and add the following to your .bashrc file:

    # R package directory
    export R_LIBS=~/path/to/R/lib

If, upon your next login to the cluster, you start R (see the section Available Software above) and your newly created folder is displayed as the primary library, then the .bashrc modification worked.
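For example, the setup might be sketched as follows (the directory name is arbitrary; any path under your home directory works):

```shell
# create a personal package library (the path is just an example)
mkdir -p "$HOME/lib/R"

# point R at it; put this line in your .bashrc so it persists
export R_LIBS="$HOME/lib/R"

# inside R, .libPaths() should now list this directory first, and
# install.packages() will install into it by default
```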

References

Richard M. Stallman and the GCC Developer Community. Using the GNU Compiler Collection. The Free Software Foundation, 2007. URL http://gcc.gnu.org.

Intel Corporation. Intel C++ Compiler Documentation. Intel Corporation, 2007a. URL http://www.intel.com.

Intel Corporation. Intel Fortran Compiler Documentation. Intel Corporation, 2007b. URL http://www.intel.com.