Advanced Operating Systems CS428




Advanced Operating Systems CS428
Lecture 10, Semester I, 2009-10
Graham Ellis, NUI Galway, Ireland

DIY Parallelism

MPI is useful for C and Fortran programming. When using higher-level computational software (such as GAP, Singular, Macaulay, GBParis, CoCoA, ...) with no built-in functions for parallelism, the user can develop his/her own message-passing interface for parallel computing. We'll consider an example developed for the GAP package HAP.

Brief description of HAP

HAP is aimed at computations in algebraic topology (see here). It is distributed with GAP and loaded by typing the following command at the GAP prompt.

gap> LoadPackage("hap");

Many computations in algebraic topology require significant memory and significant CPU time.

Parallel computation using HAP

To help with large computations the user can start one or more copies of GAP as new processes. The following starts a new process on the local machine.

gap> s:=childprocess();

The following starts a new process on a remote machine (note that the hostname is passed as a string).

gap> t:=childprocess("alberti.nuigalway.ie");

The core functions for handling child processes in HAP are described here. Some simple parallel computations are described here.

Load balancing in HAP: ParallelList

In GAP the command List(L,f); inputs a list L and a function f, and returns the list obtained by applying f to each element of L.

The HAP command ParallelList(L,"f",S); inputs a list L, the string name "f" of a function f, and a list S of child processes. It returns the list obtained by applying f to each element of L.

ParallelList(L,"f",S); runs through the elements of L and, for each element x, waits until some process in S is available for computation; it then asks that process to compute f(x).

The same simple algorithm is used in post offices to deal with a queue of people. The algorithm achieves an optimal load balance.
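As a sketch of how the pieces above fit together (the function f and the list of inputs are purely illustrative; the childprocess and ParallelList commands are those described on the previous slides):

gap> f:=function(n) return n^2; end;;       # an illustrative function
gap> S:=[childprocess(),childprocess()];;   # a pool of two local child processes
gap> ParallelList([1..100],"f",S);          # f applied to each of 1..100,
                                            # farmed out to whichever child is free

Note that ParallelList receives the function by its string name "f", so f must be defined in the child processes as well as in the parent.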

Passing complicated data types in HAP

One limitation of MPI is that it is not easy to pass complicated data types from one process to another; only basic data types (integers, floating-point numbers, ...) can be passed easily.

In HAP the function HAPPrintTo("file",X) can be used to write a complicated data type X to a file, and the function HAPRead("file",X) can be used to read the data type back into GAP. Together, these two functions can be used to transport complicated data types between processes.

A non-trivial example is given at the end of this page.
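A hedged sketch of the round trip, assuming HAPPrintTo and HAPRead behave as just described (the object being serialised and the filename are illustrative; here the object is a free resolution, a typical "complicated" HAP data type):

gap> R:=ResolutionFiniteGroup(SymmetricGroup(4),5);;  # some complicated HAP object
gap> HAPPrintTo("resolution.txt",R);                  # write it to a file
gap> HAPRead("resolution.txt",X);                     # read it back in, e.g. inside
                                                      # a child process on another machine

Since the transfer goes via an ordinary file, the same mechanism works between processes on one machine (a shared filesystem) or, with the file copied across, between machines.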