Environment Setup, Compilation and Batchjob Systems

Introduction to CSC's services
Thomas Zwinger, thomas.zwinger[at]csc.fi
Computational Environment & Application
CSC - IT Center for Science Ltd., Espoo, Finland

Contents
- CSC's servers: HP CP4000 BL ProLiant, Cray XT4/XT5
- Setting up the system
- Compilation
- Batch job systems: LSF-HPC (parallel MPI batch jobs), PBS
- Where to get help

HP CP4000 BL ProLiant (murska.csc.fi)
- Linux cluster: 4 login nodes, 544 compute nodes
- AMD Dual-Core Opteron processors (2048 cores in total)
- Main memory: 4 TB; nodes with 1, 2 and 4 GB of memory per core
- Disk capacity: 98 TB of local fast SFS/Lustre disk
- Communication: InfiniBand (288-port cabinet and 24-port enclosure)

Cray XT4/XT5 (louhi.csc.fi)
- Linux cluster with enhanced communication hardware (SeaStar2)
- Current configuration: 11 XT4 cabinets (4048 compute cores)
- Extension in fall 2008: +7 XT5 cabinets (5376 compute cores, i.e. 9424 cores in total)
- Main memory: 10.3 TB
- Disk capacity: currently 70 TB

Setting up the system
Load the environment with the module command.
Checking available modules:
  louhi-login8 > module avail
Checking currently loaded modules:
  louhi-login8 > module list
  Currently Loaded Modulefiles:
   1) modules/                 11) pgi/
   2) pbs/                     12) totalview-support/
   3) MySQL/                   13) xt-totalview/8.4.1b
   4) xt-service/2.1.27hd      14) fftw/
   5) xt-libc/2.1.27hd         15) xt-libsci/
   6) xt-os/2.1.27hd           16) xt-mpt/
   7) xt-boot/2.1.27hd         17) xt-pe/2.1.27hd
   8) xt-lustre-ss/2.1.27hd    18) xt-asyncpe/1.0c
   9) xtpe-target-cnl          19) PrgEnv-pgi/2.1.27HD
  10) Base-opts/2.1.27HD       20) xtpe-quadcore

Setting up on murska/louhi: module
Switching modules (e.g. from the PGI to the GNU suite):
  louhi-login8 > module switch PrgEnv-pgi PrgEnv-gnu
Unloading modules:
  louhi-login8 > module unload PrgEnv-gnu
Loading modules:
  louhi-login8 > module load PrgEnv-pgi
Checking options:
  louhi-login8 > module help modulename
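Beyond the commands above, the module system can also show what a given module would change in the environment before you load or switch it. A minimal sketch, not taken from the slides, using the PrgEnv-pgi module from the listing above:

  louhi-login8 > module show PrgEnv-pgi      lists the paths and environment variables the module sets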

Compilation
Several compiler suites are available (louhi and murska):
- default: Portland Group Inc. (PGI), PrgEnv-pgi
- alternatives: PathScale, PrgEnv-pathscale; GNU (open source), PrgEnv-gnu
Switch them with the module command:
  > module switch PrgEnv-pgi PrgEnv-gnu          (for the default revision)
or explicitly with a revision, e.g. PrgEnv-gnu/3.4.6, PrgEnv-gnu/4.1.2, PrgEnv-gnu/4.2.4
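As a sketch of the explicit form, using one of the GNU revisions named above (the revisions actually installed on the system may differ):

  louhi-login8 > module switch PrgEnv-pgi PrgEnv-gnu/4.2.4      load a specific GNU revision
  louhi-login8 > module switch PrgEnv-gnu/4.2.4 PrgEnv-pgi      return to the default PGI suite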

Compilation cont'd
Common option switches:
  -c            Compile only; produces the unlinked object file filename.o
  -o filename   Assigns the name filename to the executable (default: a.out)
  -g            Produces symbolic debug information
  -Idirname     Searches directory dirname for include files or module files
  -Ldirname     Searches directory dirname for library files specified by -l
  -llibname     Searches the specified library file with the name liblibname.a
  -Olevel       Specifies whether to optimize and at which level
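Put together, a two-step build might look like the sketch below; the source file, directories and library name are made up for illustration, and cc stands for whichever compiler wrapper is in use on the machine:

  > cc -c -O2 -I/wrk/yourid/include prog.c             compile only, produces prog.o
  > cc -o prog prog.o -L/wrk/yourid/lib -lmylib        link, searching libmylib.a in /wrk/yourid/lib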

Compilation cont'd
On louhi:
- All compiler suites have the same wrapper calls; do not use the suite-specific calls (gcc, pathf90, ...).
  Language        Compiler   File suffix
  Fortran 90/95   ftn        .f90, .f95, .f, .F90, .F95, .F
  Fortran 77      f77        .f, .F
  C               cc         .c, .i
  C++             CC         .C, .cc, .ii
- Do not use ld for separate linking; use the compiler wrappers.
- The CNL OS only supports statically linked objects.
Example C program:
  louhi-login8 > cc hello.c -o hello
  /opt/xt-asyncpe/1.0c/bin/cc: INFO: linux target is being used
  0.272u 4.312s 0: % 0+0k 0+0io 1pf+0w
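A Fortran equivalent of the C example, as a sketch (hello_mpi.f90 is a hypothetical source file; on the XT the ftn wrapper pulls in the MPI and system libraries of the loaded programming environment and links statically):

  louhi-login8 > ftn -O2 hello_mpi.f90 -o hello_mpi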

Compilation cont'd
On murska:
- For MPI compilations, all compiler suites have the same wrapper calls.
  Language        Compiler   File suffix
  Fortran 90/95   mpif90     .f90, .f95, .f, .F90, .F95, .F
  Fortran 77      mpif77     .f, .F
  C               mpicc      .c, .i
  C++             mpiCC      .C, .cc, .ii
- Use the wrapper compiler commands for separate linking.
- Static as well as shared linking is possible on murska.
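For instance, the hello_mpi executable used in the LSF example further below could be built on murska roughly as follows (hello_mpi.c is a hypothetical source file):

  yourid@murska:/wrk/yourid> mpicc -O2 hello_mpi.c -o hello_mpi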

Batch job system
Why? Because demand usually exceeds resources.
How? Job management systems...
- provide different queues for different types of jobs (parallel, serial, ...)
- attempt to keep the load on the machine as high as possible
- try to schedule submitted jobs by evaluating the needed resources
The user should therefore provide as much information as possible (memory, wall clock time, ...).

LSF-HPC, parallel batch job (MPI)
  #!/bin/csh
  #BSUB -L /bin/csh              execution shell environment
  #BSUB -J anotherjobname        name of your job
  #BSUB -e parmpi_err.%J         system error message output file
  #BSUB -o parmpi_output.%J      system message output file
  #BSUB -M 1024                  per-process (soft) memory limit in KB
  #BSUB -W 00:02                 wall clock time in hh:mm
  #BSUB -n 2                     number of MPI processes
  mpirun -srun ./hello_mpi       run the parallel executable
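Scaling the same script up is only a matter of changing the resource requests; the sketch below is illustrative, and the acceptable core counts and limits depend on murska's queue configuration:

  #!/bin/csh
  #BSUB -L /bin/csh
  #BSUB -J bigger_job
  #BSUB -e bigger_err.%J
  #BSUB -o bigger_output.%J
  # larger per-process memory limit (in KB) and a longer wall clock limit
  #BSUB -M 512000
  #BSUB -W 02:00
  # 64 MPI processes instead of 2
  #BSUB -n 64
  mpirun -srun ./hello_mpi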

LSF-HPC: submitting and monitoring
Submitting:
  yourid@murska:/wrk/yourid> bsub < run_lsf.sh
  (the < is essential!)
Or instead interactively:
  bsub -M 1024 -W 00:02 -n 2 -Ip mpirun -srun ./hello_mpi
Monitoring:
  bjobs -u yourid
Kill a job:
  bkill job-id
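A few further LSF commands that are often useful next to bjobs, given here as a sketch (see the murska user guide for the locally supported options):

  bqueues            list the available queues and their limits
  bjobs -l job-id    detailed information on a single job
  bpeek job-id       peek at the output of a running job
  bhist job-id       show the history of a job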

PBS (Portable Batch System)
  #!/bin/sh
  #PBS -N jobname                  name of your job
  #PBS -j oe                       join error messages into the standard output file
  #PBS -l walltime=00:01:00        wall clock time in hh:mm:ss
  #PBS -l mppwidth=32              number of allocated cores
  #PBS -l mppmem=20m               amount of allocated memory (in connection with aprun's -m option)
  cd $PBS_O_WORKDIR                change to the submission directory
  aprun -n 32 -m 20M ./hello       run the executable
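The core count requested with mppwidth has to match the -n given to aprun; a sketch of the same job shrunk to 8 cores (values are illustrative only):

  #!/bin/sh
  #PBS -N smalljob
  #PBS -j oe
  #PBS -l walltime=00:05:00
  # request 8 cores; must match the -n argument of aprun below
  #PBS -l mppwidth=8
  cd $PBS_O_WORKDIR
  aprun -n 8 ./hello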

PBS: submitting and monitoring
Submit with:
  qsub job.sh
Check status:
  qstat
Delete a job:
  qdel job-id
Output in file:
  louhi-login8 /wrk/yourid/helloworld> less test.o64889
  Warning: no access to tty (Bad file descriptor).
  Thus no job control in this shell.
  Hello world from process 1 of 32
  Hello world from process 2 of 32
  Hello world from process 3 of 32
  Hello world from process 0 of 32
  ...
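Some additional qstat forms that are handy in practice, shown as a sketch (exact output depends on the PBS version installed on louhi):

  qstat -u yourid    show only your own jobs
  qstat -f job-id    full information on a single job
  qstat -q           list the queues and their limits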

Where to get more information
- User Guides for louhi and murska
- Software and databases
- Data storage
- CSC Helpdesk (Mon-Fri): +358-(0) helpdesk[at]csc.fi
