# University of Amsterdam - SURFsara: High Performance Computing and Big Data Course


## Workshop 7: OpenMP and MPI Assignments

Clemens Grelck, Roy Bakker, Adam Belloum

January 26, 2015

## 1 Getting started with OpenMP

### 1.1 Skeleton

A very simple skeleton for OpenMP code is the following (let us call it foo.c):

```c
#include <omp.h>

int main()
{
    omp_set_num_threads(8);
    return 0;
}
```

We need to set the number of threads explicitly because we are executing on a cluster with a batch job submission system. To compile your code, run `gcc -O3 -fopenmp foo.c -o foo`.

### 1.2 Hello World

Of course, your first program should be Hello World, but as we are using OpenMP we want each thread to say hello. Write a simple program that starts a parallel section in which each thread says hello and prints its thread id. Also make sure that the master thread (and only the master thread) prints the total number of threads in the parallel section.

### 1.3 Hello World 2.0

Now let us make Hello World slightly less trivial. Extend your code so that each thread says hello and reports that it is thread m of n. As a little challenge, you may request the number of active threads only once in the entire program.

## 2 Reductions

Consider the code example in Fig. 1.

```c
const int n = 1000;
int arr[n];
int i;
int result = 0;

for (i = 0; i < n; i++)
    arr[i] = i;

for (i = 0; i < n; i++)
    result += arr[i];
```

Figure 1: Simple reduction code in (sequential) C
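The exercises below ask for three parallel versions of the loops in Fig. 1. As a non-authoritative reference, here is a minimal sketch of the critical-section and reduction-clause variants; the thread count of 8 and the variable name `result2` are illustrative choices, not part of the assignment:

```c
#include <omp.h>
#include <stdio.h>

int main(void)
{
    const int n = 1000;
    int arr[n];
    int i;
    int result = 0;

    omp_set_num_threads(8);

    #pragma omp parallel for            /* every iteration is independent   */
    for (i = 0; i < n; i++)
        arr[i] = i;

    /* Variant A: protect each update of the shared sum with a critical
       section (correct, but serialises all additions).                     */
    #pragma omp parallel for
    for (i = 0; i < n; i++) {
        #pragma omp critical
        result += arr[i];
    }

    /* Variant B: let OpenMP keep a private partial sum per thread and
       combine the partial sums once at the end.                            */
    int result2 = 0;
    #pragma omp parallel for reduction(+:result2)
    for (i = 0; i < n; i++)
        result2 += arr[i];

    printf("critical: %d, reduction: %d\n", result, result2);
    return 0;
}
```

The critical-section variant serialises every update of `result` and therefore tends to be slower than the sequential loop, whereas the reduction clause combines per-thread partial sums only once, which is the performance difference the exercises ask you to explain.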

Based on the code in Fig. 1, work through the following exercises:

- Parallelize the first for-loop.
- Parallelize the second for-loop using a critical section.
- Parallelize the second for-loop using the reduction clause.
- Compare your three implementations. Can you explain the performance difference?

## 3 Mandelbrot

Fig. 2 shows a simple sequential C program that generates a Mandelbrot image, similar to the code used during the lecture. You can compile this code with `gcc -fopenmp mandel.c -o mandel`. When running the resulting binary, redirect the standard output to a file, e.g. mandel.ppm. To show your image, run `display mandel.ppm`. For your comfort the code and a simple makefile are available at:

Exercise 1: Examine the code and try to parallelize it with OpenMP pragmas.

Exercise 2: Parallelizing the outer loop is often a good idea. However, this might require you to reorganize some of the code. Try to do this.

Exercise 3: Try to parallelize your reorganized code as much as possible and experiment with different setups. How do you obtain the best performance?

Exercise 4: Experiment with the different scheduling techniques that OpenMP provides. Can you measure performance differences? Which choice gives you the best runtimes?

Exercise 5: Add a dithering step to the Mandelbrot example, analogous to the example in the lecture. Can you measure a difference in performance if you use the nowait clause?

Exercise 6: Parallelize the middle loop instead of the outer one, and play with the collapse clause. Can you measure a performance difference between the three different versions?

## 4 MPI collective communication

Collective communication is a form of structured communication where, instead of a dedicated sender and a dedicated receiver, all MPI processes of a given communicator participate. A simple example is broadcast communication, where a given MPI process sends a message to all other MPI processes in the communicator.

Write a C+MPI function

```c
int MYMPI_Bcast(
    void         *buffer,       /* INOUT : buffer address        */
    int           count,        /* IN    : buffer size           */
    MPI_Datatype  datatype,     /* IN    : datatype of entry     */
    int           root,         /* IN    : root process (sender) */
    MPI_Comm      communicator) /* IN    : communicator          */
```

that implements broadcast communication by means of point-to-point communication. Each MPI process of the given communicator is assumed to execute the broadcast function with the same message parameters and root process argument. The root process is supposed to send the content of its own buffer to all other processes of the communicator. Any non-root process uses the given buffer for receiving the corresponding data from the root process.
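As a hedged reference for the fully connected case of the first exercise below, a straightforward implementation of this specification might look as follows; the tag value 4711 is an illustrative choice, not prescribed by the assignment:

```c
#include <mpi.h>

/* Sketch of MYMPI_Bcast for a fully connected network: the root sends the
 * buffer to every other rank with plain point-to-point calls.              */
int MYMPI_Bcast(void *buffer, int count, MPI_Datatype datatype,
                int root, MPI_Comm communicator)
{
    const int TAG = 4711;                 /* arbitrary tag for this operation */
    int rank, size, i;

    MPI_Comm_rank(communicator, &rank);
    MPI_Comm_size(communicator, &size);

    if (rank == root) {
        for (i = 0; i < size; i++)        /* root sends to everyone else      */
            if (i != root)
                MPI_Send(buffer, count, datatype, i, TAG, communicator);
    } else {                              /* non-root ranks receive once      */
        MPI_Recv(buffer, count, datatype, root, TAG, communicator,
                 MPI_STATUS_IGNORE);
    }
    return MPI_SUCCESS;
}
```

All ranks call the function symmetrically, e.g. `MYMPI_Bcast(buf, 100, MPI_INT, 0, MPI_COMM_WORLD)`. The root issues size-1 individual sends, which is acceptable when every pair of nodes is directly connected at the same cost.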

```c
#include <stdio.h>
#include <math.h>
#include <stdlib.h>

void put_pixel(int red, int green, int blue)
{
    fputc((char)red, stdout);
    fputc((char)green, stdout);
    fputc((char)blue, stdout);
}

int main()
{
    double x, xx, y, cx, cy;
    const int itermax = 10000;      /* how many iterations to do */
    const double magnify = 2.0;     /* no magnification          */
    const int hxres = 1000;         /* horizontal resolution     */
    const int hyres = 1000;         /* vertical resolution       */
    int iter, hy, hx;

    /* header for PPM output */
    printf("P6\n");
    printf("%d %d\n255\n", hxres, hyres);

    for (hy = 1; hy <= hyres; hy++) {
        for (hx = 1; hx <= hxres; hx++) {
            /* the numeric offset at the end of the cx line was lost in the
               transcription; 3.0-0.7 is a plausible reconstruction          */
            cx = (((float)hx)/((float)hxres)-0.5)/magnify*3.0-0.7;
            cy = (((float)hy)/((float)hyres)-0.5)/magnify*3.0;
            x = 0.0;
            y = 0.0;
            for (iter = 1; iter < itermax; iter++) {
                xx = x*x-y*y+cx;
                y = 2.0*x*y+cy;
                x = xx;
                if (x*x+y*y > 100.0)
                    iter = 999999;  /* escape marker; exact constant lost in
                                       the transcription, any value above the
                                       99999 test below works                */
            }
            if (iter < 99999)
                put_pixel(0, 255, 255);
            else
                put_pixel(180, 0, 0);
        }
    }
    return 0;
}
```

Figure 2: Simple Mandelbrot implementation in (sequential) C
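Referring back to Exercises 2 and 3 of Section 3, one hedged way to reorganise this code so that the outer loop can be parallelised is to separate the escape-time computation from the ordered PPM output. In the sketch below, the buffer `iters`, the `schedule(dynamic)` choice and the coordinate constants are assumptions for illustration, not the prescribed solution:

```c
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const int hxres = 1000, hyres = 1000;   /* image resolution              */
    const int itermax = 10000;              /* escape-time iteration limit   */
    const double magnify = 2.0;
    int *iters = malloc((size_t)hxres * hyres * sizeof *iters);
    int hx, hy;

    omp_set_num_threads(8);

    /* Phase 1: every row is independent, so the outer loop is parallelised.
       schedule(dynamic) helps because rows intersecting the set are much
       more expensive than rows that escape quickly (Exercise 4).            */
    #pragma omp parallel for private(hx) schedule(dynamic)
    for (hy = 0; hy < hyres; hy++) {
        for (hx = 0; hx < hxres; hx++) {
            double cx = ((double)hx / hxres - 0.5) / magnify * 3.0 - 0.7;
            double cy = ((double)hy / hyres - 0.5) / magnify * 3.0;
            double x = 0.0, y = 0.0;
            int iter;
            for (iter = 0; iter < itermax; iter++) {
                double xx = x * x - y * y + cx;
                y = 2.0 * x * y + cy;
                x = xx;
                if (x * x + y * y > 100.0)
                    break;                  /* point escaped                 */
            }
            iters[hy * hxres + hx] = iter;
        }
    }

    /* Phase 2: sequential output keeps the PPM pixel order intact.          */
    printf("P6\n%d %d\n255\n", hxres, hyres);
    for (hy = 0; hy < hyres; hy++)
        for (hx = 0; hx < hxres; hx++)
            if (iters[hy * hxres + hx] < itermax) {
                putchar(180); putchar(0); putchar(0);     /* escaped         */
            } else {
                putchar(0); putchar(255); putchar(255);   /* inside the set  */
            }

    free(iters);
    return 0;
}
```

Because rows near the set take far longer than rows that escape quickly, a dynamic or guided schedule usually balances the load better than the default static schedule; measuring that difference is exactly what Exercise 4 asks for.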

Exercise 1: Implement the above function MYMPI_Bcast using point-to-point communication. Assume a fully connected network topology, i.e. every node may send messages to any other node at the same cost.

Exercise 2: A particular advantage of collective communication operations is that their implementation can take advantage of the underlying physical network topology without making an MPI-based application program specific to any such topology. For your implementation of broadcast, assume a 1-dimensional ring topology, where each node only has communication links with its two direct neighbours with (circularly) increasing and decreasing MPI process ids. While any MPI process may, nonetheless, send messages to any other MPI process, messages may need to be routed through a number of intermediate nodes. Communication cost can be assumed to be linear in the number of nodes involved. Aim for an efficient implementation of your function on an (imaginary) ring network topology. No experiments are required for this assignment, but describe your code in detail in a short report: how it is adapted to the given network topology and how many atomic communication events (sending a message from one node to a neighbouring node) are required to complete one broadcast. (A non-authoritative sketch of one possible ring scheme is given below, after Section 5.)

## 5 Distributed Mandelbrot (optional)

Parallelise the Mandelbrot code of Fig. 2 to run on scalable architectures using MPI message passing.
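Coming back to Exercise 2 of Section 4, one possible (non-authoritative) scheme is to let the root start the message in both directions around the ring and have every other process receive it from the neighbour that is closer to the root and forward it once, until the two branches meet. The function name MYMPI_Bcast_ring and the tag value are illustrative:

```c
#include <mpi.h>

/* Sketch: broadcast over a 1-D ring by forwarding the message in both
 * directions from the root.                                                 */
int MYMPI_Bcast_ring(void *buffer, int count, MPI_Datatype datatype,
                     int root, MPI_Comm comm)
{
    const int TAG = 4712;
    int rank, p;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &p);
    if (p == 1) return MPI_SUCCESS;

    int right = (rank + 1) % p;          /* neighbour with next higher id    */
    int left  = (rank - 1 + p) % p;      /* neighbour with next lower id     */
    int rel   = (rank - root + p) % p;   /* position relative to the root    */
    int nf    = p / 2;                   /* hops covered in forward direction */

    if (rel == 0) {                      /* root: start both directions      */
        MPI_Send(buffer, count, datatype, right, TAG, comm);
        if (p > 2)
            MPI_Send(buffer, count, datatype, left, TAG, comm);
    } else if (rel <= nf) {              /* forward (increasing id) branch   */
        MPI_Recv(buffer, count, datatype, left, TAG, comm, MPI_STATUS_IGNORE);
        if (rel < nf)
            MPI_Send(buffer, count, datatype, right, TAG, comm);
    } else {                             /* backward (decreasing id) branch  */
        MPI_Recv(buffer, count, datatype, right, TAG, comm, MPI_STATUS_IGNORE);
        if (rel > nf + 1)
            MPI_Send(buffer, count, datatype, left, TAG, comm);
    }
    return MPI_SUCCESS;
}
```

This scheme needs p-1 neighbour-to-neighbour messages in total, and the longer of the two branches finishes after roughly (p-1)/2 sequential hops, which is the kind of argument the report for this exercise would make.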

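For the optional assignment of Section 5, a minimal hedged sketch is a static row-block decomposition: every rank computes a contiguous block of rows locally, and rank 0 gathers the per-pixel iteration counts and writes the image. The helper function escape, the use of MPI_Gatherv and all constants are illustrative assumptions; a dynamic master-worker distribution would balance the load better but is longer to write down:

```c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

/* escape-time count for one pixel (same recurrence as Fig. 2) */
static int escape(double cx, double cy, int itermax)
{
    double x = 0.0, y = 0.0;
    int iter;
    for (iter = 0; iter < itermax && x * x + y * y <= 100.0; iter++) {
        double xx = x * x - y * y + cx;
        y = 2.0 * x * y + cy;
        x = xx;
    }
    return iter;
}

int main(int argc, char **argv)
{
    const int hxres = 1000, hyres = 1000, itermax = 10000;
    const double magnify = 2.0;
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* rows [lo, hi) handled by this rank; the last rank takes the remainder */
    int rows = hyres / size;
    int lo = rank * rows;
    int hi = (rank == size - 1) ? hyres : lo + rows;
    int *local = malloc((size_t)(hi - lo) * hxres * sizeof *local);

    for (int hy = lo; hy < hi; hy++)
        for (int hx = 0; hx < hxres; hx++) {
            double cx = ((double)hx / hxres - 0.5) / magnify * 3.0 - 0.7;
            double cy = ((double)hy / hyres - 0.5) / magnify * 3.0;
            local[(hy - lo) * hxres + hx] = escape(cx, cy, itermax);
        }

    /* gather the row blocks on rank 0 */
    int *counts = NULL, *displs = NULL, *image = NULL;
    if (rank == 0) {
        counts = malloc(size * sizeof *counts);
        displs = malloc(size * sizeof *displs);
        image  = malloc((size_t)hxres * hyres * sizeof *image);
        for (int r = 0; r < size; r++) {
            int rlo = r * rows, rhi = (r == size - 1) ? hyres : rlo + rows;
            counts[r] = (rhi - rlo) * hxres;
            displs[r] = rlo * hxres;
        }
    }
    MPI_Gatherv(local, (hi - lo) * hxres, MPI_INT,
                image, counts, displs, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {                    /* rank 0 writes header and pixels   */
        printf("P6\n%d %d\n255\n", hxres, hyres);
        for (int px = 0; px < hxres * hyres; px++)
            if (image[px] < itermax) {
                putchar(180); putchar(0); putchar(0);     /* escaped         */
            } else {
                putchar(0); putchar(255); putchar(255);   /* inside the set  */
            }
    }

    MPI_Finalize();
    return 0;
}
```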