High performance computing systems. Lab 1


Dept. of Computer Architecture, Faculty of ETI, Gdansk University of Technology
Paweł Czarnul

For this exercise, study basic MPI functions such as:

1. for MPI management: MPI_Init(...), MPI_Finalize()

Each MPI program should start with MPI_Init(...) and finish with MPI_Finalize(). Each process can fetch the number of processes in the default communicator MPI_COMM_WORLD (i.e. in the application) by calling MPI_Comm_size() (see the example below). Processes in an MPI application are identified by so-called ranks, ranging from 0 to n-1, where n is the number of processes returned by MPI_Comm_size(). Based on its rank, each process can perform a part of the required computations so that all processes contribute to the final goal and together process all required data.

2. for point-to-point communication: MPI_Send(...), MPI_Recv(...)

int MPI_Send(void *buf, int count, MPI_Datatype dtype, int dest, int tag, MPI_Comm comm)

MPI_Send sends the data pointed to by buf to the process with rank dest. There should be count elements of data type dtype. For instance, when sending 5 doubles, count should be 5 and dtype should be MPI_DOUBLE. tag can be any number which additionally describes the message, and comm can be MPI_COMM_WORLD for the default communicator.

int MPI_Recv(void *buf, int count, MPI_Datatype dtype, int src, int tag, MPI_Comm comm, MPI_Status *stat)

MPI_Recv is a blocking receive which waits for a message with tag tag from the process with rank src in communicator comm. dtype and count denote the type and the number of elements which are to be received and stored in buf. stat holds information about the received message. A minimal send/receive sketch is given after this list.

3. for collective communication: MPI_Barrier(...), MPI_Gather(...), MPI_Scatter(...), MPI_Allgather(...)
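As an illustration of the point-to-point calls from item 2, here is a minimal sketch (not part of the original lab code, and it assumes the application is started with at least 2 processes): process 0 sends five doubles to process 1, which receives them with a blocking MPI_Recv.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int myrank, proccount;
    double data[5] = {1.0, 2.0, 3.0, 4.0, 5.0};
    MPI_Status stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &proccount);

    if (proccount < 2) {
        printf("run with at least 2 processes\n");
        MPI_Finalize();
        return -1;
    }
    if (myrank == 0) {
        // send 5 doubles to process 1, message tag 0
        MPI_Send(data, 5, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (myrank == 1) {
        // blocking receive of 5 doubles from process 0, message tag 0
        MPI_Recv(data, 5, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &stat);
        printf("process 1 received %f ... %f\n", data[0], data[4]);
    }

    MPI_Finalize();
    return 0;
}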

As an example of a collective operation,

int MPI_Reduce(void *sbuf, void *rbuf, int count, MPI_Datatype dtype, MPI_Op op, int root, MPI_Comm comm)

reduces the values given by all processes in communicator comm to a single value in the process with rank root. See the code below for adding numbers given by all processes to a single value in process 0.

Study the following tutorial on MPI:

The following example computes pi in parallel using an old method from the 17th century:

Pi/4 = 1/1 - 1/3 + 1/5 - 1/7 + 1/9 - ...     (1)

Note that the program works for any number of processes requested. Successive elements of (1) are assigned to successive processes with ranks from 0 to (proccount-1).

For 2 processes: Pi/4 = 1/1 - 1/3 + 1/5 - 1/7 + 1/9 - ..., with terms assigned to processes 0, 1, 0, 1, 0, ...

For 3 processes: Pi/4 = 1/1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + ..., with terms assigned to processes 0, 1, 2, 0, 1, 2, ...

etc. This is a simple load balancing technique. For example, checking whether successive numbers are prime might take more time for larger numbers; this cyclic assignment balances the execution time among processes quite well. Note that in reality we only consider a predefined number of elements of (1). In general, we should make sure that the data types used for adding the numbers can store the resulting subsums.
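The prime-number remark can be made concrete with the following sketch (not part of the original lab code; the upper bound of 2000000 and the naive is_prime() helper are arbitrary choices for illustration). Each process tests every proccount-th candidate, starting from its own rank, and the per-process counts are merged with MPI_Reduce:

#include <stdio.h>
#include <mpi.h>

// naive primality test - deliberately more expensive for larger n
int is_prime(long n) {
    long d;
    if (n < 2) return 0;
    for (d = 2; d * d <= n; d++)
        if (n % d == 0) return 0;
    return 1;
}

int main(int argc, char **argv) {
    int myrank, proccount;
    long limit = 2000000; // arbitrary upper bound for this sketch
    long n, count = 0, total = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &proccount);

    // cyclic assignment: process r tests r, r+proccount, r+2*proccount, ...
    for (n = myrank; n < limit; n += proccount)
        count += is_prime(n);

    // merge the per-process counts in process 0
    MPI_Reduce(&count, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (!myrank)
        printf("primes below %ld: %ld\n", limit, total);

    MPI_Finalize();
    return 0;
}

The complete pi program from the lab follows: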

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    double precision = 1000000000; // number of terms of (1) to consider; assumed value - the original constant is missing here
    int myrank, proccount;
    double pi, pi_final;
    int mine, sign;
    int i;

    // Initialize MPI
    MPI_Init(&argc, &argv);

    // find out my rank
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    // find out the number of processes in MPI_COMM_WORLD
    MPI_Comm_size(MPI_COMM_WORLD, &proccount);

    // now distribute the required precision
    if (precision < proccount) {
        printf("precision smaller than the number of processes - try again.");
        MPI_Finalize();
        return -1;
    }

    // each process performs computations on its part
    pi = 0;
    mine = myrank * 2 + 1;
    sign = (((mine - 1) / 2) % 2) ? -1 : 1;
    for (; mine < precision;) {
        // printf("\nprocess %d %d %d", myrank, sign, mine);
        // fflush(stdout);
        pi += sign / (double)mine;
        mine += 2 * proccount;
        sign = (((mine - 1) / 2) % 2) ? -1 : 1;
    }

    // now merge the numbers to rank 0
    MPI_Reduce(&pi, &pi_final, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (!myrank) {

        pi_final *= 4;
        printf("pi=%f", pi_final);
    }

    // Shut down MPI
    MPI_Finalize();
    return 0;
}

Assuming the code was saved in file program.c, we have to:

1. compile the code:

mpicc program.c

2. run it:

1 process:

[klaster@n01 1]$ time mpirun -np 1 ./a.out

real    0m9.286s
user    0m9.244s
sys     0m0.037s

2 processes:

[klaster@n01 1]$ time mpirun -np 2 ./a.out

real    0m4.706s
user    0m9.286s
sys     0m0.063s

4 processes:

[klaster@n01 1]$ time mpirun -np 4 ./a.out

real    0m2.420s
user    0m9.380s
sys     0m0.118s

Note the smaller execution times for larger numbers of processes used for the computations.
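From the timings above, the speedup over the 1-process run is roughly 9.286/4.706 ≈ 1.97 for 2 processes and 9.286/2.420 ≈ 3.84 for 4 processes, i.e. close to linear scaling for this code.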

Lab 527: For this lab, you can use the default MPI implementation on the desxx computers in the lab (xx ranging from 01 to 18) - Open MPI.

Compile the code:

mpicc program.c

Create a configuration for the virtual machine - in this case just 2 nodes (des01 and des02):

student@des01:~> cat > machinefile
des01
des02

Then invoke the application for 1 process (running on des01):

student@des01:~> mpirun -machinefile ./machinefile -np 1 time ./a.out
9.25user 0.01system 0:09.27elapsed 99%CPU (0avgtext+0avgdata 13008maxresident)k
0inputs+0outputs (0major+1009minor)pagefaults 0swaps

and for 2 processes (running on des01 and des02):

student@des01:~> mpirun -machinefile ./machinefile -np 2 time ./a.out
4.63user 0.01system 0:04.65elapsed 99%CPU (0avgtext+0avgdata 13072maxresident)k
0inputs+0outputs (0major+1013minor)pagefaults 0swaps
4.63user 0.01system 0:04.67elapsed 99%CPU (0avgtext+0avgdata 13312maxresident)k
0inputs+0outputs (0major+1023minor)pagefaults 0swaps

You can create a larger virtual machine and test the scalability of the application, for example as sketched below.
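A possible larger configuration (a sketch only, assuming des01-des04 are all reachable and see the compiled a.out, e.g. via a shared home directory or after copying it to each node as in the mpich example below):

student@des01:~> cat > machinefile
des01
des02
des03
des04
student@des01:~> mpirun -machinefile ./machinefile -np 4 time ./a.out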

Lab 527: You can also use mpich on desxx:

student@des01:~> /opt/mpich/ch-p4/bin/mpicc program.c
program.c: In function 'main':
program.c:12:7: warning: unused variable 'i'

student@des01:~> scp a.out des02:~
a.out                             100% 1427KB   1.4MB/s   00:00
student@des01:~> scp a.out des03:~
a.out                             100% 1427KB   1.4MB/s   00:00
student@des01:~> scp a.out des04:~

Now run the code:

1 process:

student@des01:~> /opt/mpich/ch-p4/bin/mpirun -np 1 -machinefile ./machinefile ./a.out
student@des01:~>

2 processes:

student@des01:~> /opt/mpich/ch-p4/bin/mpirun -np 2 -machinefile ./machinefile ./a.out
student@des01:~>

4 processes:

student@des01:~> /opt/mpich/ch-p4/bin/mpirun -np 4 -machinefile ./machinefile ./a.out
student@des01:~>

Cluster KASK: reach the cluster by ssh studentX@n01.eti.pg.gda.pl, where X is a number from 1 to 18.

The following MPI implementations are available on cluster KASK (use a full path for running mpicc and mpirun):

1. MPICH - executables such as mpicc and mpirun available in /opt/mpich2/gnu/bin/
2. Open MPI - executables in /opt/sun-ct/bin/
3. MVAPICH - executables in /usr/mpi/gcc/mvapich-1.2.0/bin/

Note: the following nodes are available on the cluster: n01 (access node) and compute-0-0, compute-0-1, ..., compute-0-8 (compute nodes).

Bibliography

MPI Docs
