MPI on morar and ARCHER

Access
- morar is available directly from the CP-Lab machines
- external access to morar via the gateway:
  ssh -X user@ph-cplab.ph.ed.ac.uk
  then: ssh -X cplabxxx (pick your favourite machine)
- external access to ARCHER:
  ssh -X user@login.archer.ac.uk
- You can access the systems using ssh from anywhere
  - Linux: trivial
  - Mac: manually enable the X server to display any graphics
  - Windows: need to install an X server program, e.g. Xming (which is free!)
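
For example, an external login to a CP-Lab machine might look like this (cplab023 is an arbitrary example machine name):

  user@laptop$ ssh -X user@ph-cplab.ph.ed.ac.uk
  user@gateway$ ssh -X cplab023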

Useful files and templates
- Take a copy of MPP-templates.tar, stored at learn.ed.ac.uk
- unpack it with: tar xvf MPP-templates.tar

Compiling MPI Programs on morar
- Fortran programmers use mpif90; C programmers use mpicc
- There is nothing magic about these MPI compilers!
  - they are simply wrappers that automatically include the various libraries etc.
  - compilation is done by the standard (Portland Group) compilers pgf90 and pgcc
- You can use the supplied Makefiles for convenience:
  make -f Makefile_c
  make -f Makefile_f90
- It is easiest to make a copy of one of these called Makefile
  - you also need to change the MF= line in the Makefile itself
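
A minimal sketch of what such a Makefile might look like, assuming the executable is called hello (the supplied Makefiles may differ in detail):

  MF=     Makefile
  CC=     mpicc
  CFLAGS= -O2
  EXE=    hello
  SRC=    hello.c

  # recipe lines must be indented with a tab character
  $(EXE): $(SRC)
          $(CC) $(CFLAGS) -o $(EXE) $(SRC)

  clean:
          rm -f $(EXE)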

Running interactively
- Timings will not be reliable
  - the machine is shared with other users, with many more processes than processors
  - but interactive running is very useful during development and for debugging
- mpiexec -n 4 ./mpiprog.exe
  - runs your code on 4 processes
- NOTE: output might be buffered
  - if your program crashes, you may see no output at all
  - you may need to explicitly flush prints to the screen: FLUSH(6) in Fortran, fflush(stdout); in C
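
As a minimal C illustration of flushing (a hypothetical snippet, not taken from the supplied templates):

  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      int rank;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      printf("Progress report from rank %d\n", rank);
      fflush(stdout);   /* force buffered output to appear immediately */

      MPI_Finalize();
      return 0;
  }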

Running on morar
- Run via a batch system
  - on morar we use Sun Grid Engine (SGE)
  - you submit a script that then launches your program
- In MPP-templates/ there is a standard batch script: mpibatch.sge
- Make a copy of this file with a name that matches your executable, e.g.
  user@cplab$ cp mpibatch.sge hello.sge
- To run on 4 processors: qsub -pe mpi 4 hello.sge
  - this automatically runs the executable called hello
  - output will appear in a file called hello.sge.oxxxxx
  - you can follow job progress using the qmon GUI, or qstat, or qstat -u '*'
  - the script also times your program using the Unix time command
- Full instructions are included as comments in the template
  - no need to alter the script, just rename it as appropriate
  - e.g. to run a program pingpong, make another copy called pingpong.sge
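
The supplied mpibatch.sge is the authoritative version, but an SGE script of this general shape gives the idea (the directives and names below are illustrative, not copied from the template):

  #!/bin/bash
  #$ -cwd    # run the job from the directory it was submitted in
  #$ -V      # pass the current environment to the job

  # $NSLOTS is set by SGE from the "-pe mpi 4" argument to qsub
  time mpiexec -n $NSLOTS ./hello

Presumably the real template derives the executable name from the script's own file name, which is why renaming the script is all that is needed.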

Morar idiosyncrasies
- Do not use the default version of MPI
  - it is very old and out-of-date
- Access the correct version with: module load mpich2-pgi
  - add this to the end of the .bash_profile file in your home directory
- To check (and similarly for mpif90):
  user@cplab$ which mpicc
  /opt/mpich2-pgi/bin/mpicc
- Output files
  - you will probably also see a file called hello.sge.exxxxx
  - it contains a spurious error message: ignore it!
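
One way to append the module command to your profile, assuming a standard bash setup:

  user@cplab$ echo "module load mpich2-pgi" >> ~/.bash_profile
  user@cplab$ source ~/.bash_profile   # pick up the change in the current shell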

Compiling MPI Programs on ARCHER
- Fortran programmers use ftn; C programmers use cc
- There is nothing magic about these MPI compilers!
  - they are simply wrappers that automatically include the various libraries etc.
  - compilation is done by the standard (Cray) compilers crayftn and craycc
- You can use the supplied Makefiles for convenience:
  make -f Makefile_c
  make -f Makefile_f90
- It is easiest to make a copy of one of these called Makefile
  - you also need to change the MF= line in the Makefile itself
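
For example, compiling by hand rather than through the Makefile (the file names are illustrative):

  user@archer$ cc -o hello hello.c      # the cc wrapper pulls in the MPI headers and libraries
  user@archer$ ftn -o hello hello.f90   # Fortran equivalent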

ARCHER idiosyncrasies
- It is not possible to run directly on the front-end
- There can be a substantial delay in the batch queues
  - we may sometimes have dedicated queues for the course: instant turnaround!
- You cannot run from the home file system
  - the back-end nodes can only see the work file system
- Recommendation: do everything in /work/
  - change directory to /work/y14/y14/guestxx/
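
For example (guestxx stands for your own course account):

  user@archer$ cd /work/y14/y14/guestxx/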

Running on the ARCHER back-end
- Run via a batch system
  - on ARCHER we use the Portable Batch System (PBS)
  - you submit a script that then launches your program
- In MPP-templates/ there is a standard batch script: mpibatch.pbs
- Make a copy of this file with a name that matches your executable, e.g.
  user@archer$ cp mpibatch.pbs hello.pbs
- Submit with: qsub -q <reserved queue ID> hello.pbs
  - we have a reserved queue RXXXXXX for the courses
  - you will need to alter NPROCS (the argument to aprun) by hand, and select more than one node for more than 24 processes
  - output will appear in a file called hello.pbs.oxxxxx
  - you can follow job progress using the qstat command
  - the script also times your program using the Unix time command
- Full instructions are included as comments in the template
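
Again, mpibatch.pbs is the authoritative version, but a PBS script for ARCHER typically has this shape (all values below are illustrative):

  #!/bin/bash
  #PBS -N hello
  #PBS -l select=1            # one 24-core node
  #PBS -l walltime=00:05:00

  cd $PBS_O_WORKDIR           # move back to the directory the job was submitted from

  NPROCS=4                    # alter by hand for a different process count
  time aprun -n $NPROCS ./hello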

C++ Interface
- MPI is not an object-oriented interface
  - however, it can be called from C++
- The function calls are different, e.g.:
  MPI::Intracomm comm;
  ...
  MPI::Init();
  comm = MPI::COMM_WORLD;
  rank = comm.Get_rank();
  size = comm.Get_size();
- The compiler is called mpicxx
  - see hello.cc and Makefile_cc
- The C++ interface is now deprecated
  - you are advised to cross-call to C instead

Documentation
- The MPI Standard is available online
  - see: http://www.mpi-forum.org/docs/
  - also available in printed form: http://www.hlrs.de/mpi/mpi22/
- Man pages are available on the CP-Lab machines and ARCHER
  - you must use the C style of naming: man MPI_Routine_name, e.g.:
    user@cplab$ man MPI_Init

MPI Books

Exercise: Hello World
- The minimal MPI program
  - see Exercise 1 on the exercise sheet
- Write an MPI program that prints a message to the screen
  - the main purpose is to get you compiling and running parallel programs on morar and ARCHER
  - it also illustrates the SPMD model and the use of basic MPI calls
- We supply some very basic template code
  - see pages 4 and 5 of the notes as well
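
A minimal C solution might look like the following sketch (the supplied templates may structure things differently):

  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      int rank, size;

      MPI_Init(&argc, &argv);                  /* start up MPI */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank */
      MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */

      printf("Hello World from rank %d of %d\n", rank, size);

      MPI_Finalize();                          /* shut down MPI */
      return 0;
  }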