
1  bwgrid Treff MA/HD

   Sabine Richling, Heinz Kredel
   Universitätsrechenzentrum Heidelberg / Rechenzentrum Universität Mannheim
   20. October 2010

2  Course Organization: bwgrid Treff

   Participants:
   - Current users of the bwgrid clusters HD/MA
   - Students and scientists interested in grid computing
   - Members of the Universities of Heidelberg and Mannheim

   Scope:
   - bwgrid status and plans
   - Lectures and/or workshops
   - User contributions
   - Meeting you in person

3  Course Organization: bwgrid Treff, Summer Term 2010

   Date        Main Focus
   29. April   bwgrid status and interconnection HD/MA
   20. May     Batch system and parallel execution of serial jobs
   17. June    Parallel programming with Java (Part I)
   15. July    Parallel programming with Java (Part II)

4  Course Organization: bwgrid Treff, Fall/Winter 2010/2011

   Lecture times: Mannheim: HS; Heidelberg: WS

   Date        Location   Main Focus
   ...         MA         Matlab & Matlab Clones
   ...         MA         Statistics Software R
   Nov 2010    HD         (open)
   Dec 2010    HD         (open)
   Jan 2011    HD         (open)

5  Course Organization: bwgrid Treff, 20. October 2010

   Agenda for today:
   - bwgrid news
   - Working with the statistical software package R on bwgrid
   - Using R for data analysis on cluster computers: distributed computing and
     statistical modeling with MPI and Rmpi on bwgrid
     (D. Junge, MZES, University of Mannheim)
   - Discussion of topics for further meetings

6  bwgrid News

7  What is bwgrid?

   - D-Grid community project of the universities in Baden-Württemberg
   - Compute clusters at 8 locations
   - Central storage unit in Karlsruhe
   - Distributed system with local administration
   - Computing centers focus on software in different fields of research
   - Access via at least one middleware supported by D-Grid

8  bwgrid Resources

   Compute clusters:
   - Mannheim/Heidelberg: 280 nodes (direct interconnection; operated as a single cluster)
   - Karlsruhe: 140 nodes
   - Stuttgart: 420 nodes
   - Tübingen: 140 nodes
   - Ulm (Konstanz): 280 nodes (joint cluster with Konstanz; hardware in Ulm)
   - Freiburg: 140 nodes
   - Esslingen: 180 nodes (more recent hardware)

   Central storage in Karlsruhe:
   - 128 TB (with backup)
   - 256 TB (without backup)

   [Slide shows a map of the bwgrid sites in Baden-Württemberg]

9  Access Possibilities

   Important!

   Access with local accounts:
   - Project numbers and user IDs (URZ); user IDs (RUM)
   - Access only to the bwgrid cluster MA/HD

   Access with a grid certificate:
   - Grid certificate, VO membership, grid middleware
   - Access to all bwgrid resources

10  bwgrid User Support

   General information:
   - Hardware and software
   - Project descriptions
   - Grid access (server addresses)
   - bwgrid portals (in development)

   D-Grid user support:
   - Trouble ticket system
   - News module for maintenance announcements

   User support available at all sites:
   - Login messages
   - Local webpages, wikis, mailing lists
   - E-mail address for local support
   - Local news

   man bw-grid:
   - Terms and conditions for using bwgrid
   - Home directory, scratch/work space, /tmp
   - Special features

11  bwgrid Cluster Mannheim/Heidelberg

   [Architecture diagram: users (Benutzer) reach the system via BelWü and VORM
   front ends; PBS batch system; administration via LDAP/AD and a shared passwd
   file; Cluster Mannheim and Cluster Heidelberg each have an InfiniBand fabric,
   coupled by an Obsidian + ADVA link; Lustre file systems bwfs MA and bwfs HD]

12  bwgrid Cluster Mannheim/Heidelberg: News

   - Reservation of all compute nodes for tomorrow
   - bwgrid storage in Heidelberg available since end of August:
     workspaces for temporary data (no backup);
     $HOME now limited to 100 GB per user (with backup)
   - Project descriptions for ...
   - New software modules???

13  Working with R on bwgrid

14  What is R?

   "R is a language and environment for statistical computing and graphics ..."

15  What is R?

   - R provides a wide variety of statistical and graphical techniques
   - Statistical techniques include linear and nonlinear modeling, classical
     statistical tests, time-series analysis, classification, clustering, ...
   - Well-designed publication-quality plots can be produced, including
     mathematical symbols and formulae
   - R is highly extensible (via packages; integration with LaTeX and
     OpenOffice; Fortran and C extensions)
   - Free software (GNU General Public License)
   - Runs on different platforms (Unix, Windows, Mac OS)
   - Excellent documentation (reference manual of about 1500 pages)
   - Excellent support via mailing lists

16  More about R

   - Main project page: download, mailing lists, manuals, FAQs, ...
   - Tutorial by G. Sawitzki (Applied Mathematics, Uni Heidelberg)
   - R at LMU Munich: R courses, publications, parallel R

17  R on bwgrid

   List the available software modules:
      module avail

   Several R versions are installed (one of them comes with the OS); each is
   activated with a command of the form
      module load math/r/<version>
   as in the sketch below.
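
   For example (a sketch: the concrete version strings were lost in this
   transcription, so math/r/2.11.1 below is an assumption):

      module avail math/r          # list the installed R versions
      module load math/r/2.11.1    # assumed version string
      module list                  # confirm which modules are loaded
      which R                      # R now resolves to the module's binary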

18  Interactive Usage of R

   Request a compute node:
      qsub -I -X -l walltime=3:00:00
   Load an R module:
      module load math/r/<version>
   Start the R command-line interface:
      R

19  Installation of R Packages

   - Within R, use the command install.packages("package-name")
   - Allow R to create an installation directory for packages in your home directory
   - Select a download mirror
   - Dependent packages are installed automatically
   - Packages must be installed separately for each version of R
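
   As a scripted variant, a minimal sketch of a non-interactive installation
   into a personal library (the module version, library path, and package name
   are illustrative assumptions, not official bwgrid values):

      module load math/r/2.11.1            # assumed version
      mkdir -p $HOME/R/2.11.1/library      # per-user library (assumed layout)
      R --vanilla <<'EOF'
      install.packages("abind",                       # any CRAN package
                       lib   = "~/R/2.11.1/library",  # install here, not system-wide
                       repos = "http://cran.r-project.org")
      EOF

   Packages installed this way are loaded later with
   library("abind", lib.loc = "~/R/2.11.1/library").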

20  R Batch Jobs

   Read the module help:
      module help math/r/<version>

   Starting a single R program (example script: bwgrid-r.pbs):
   - Replace the example R program with your own
   - Only reasonable if your program needs a lot of memory

   Starting multiple R programs on one node (example scripts: bwgrid-r-multi.pbs + task.sh):
   - Set the number of cores in bwgrid-r-multi.pbs
   - Edit task.sh to fit your needs
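
   A minimal sketch of such a job script, in the spirit of bwgrid-r.pbs (the
   real example script is not reproduced in the talk; the resource requests and
   module version below are assumptions):

      #!/bin/bash
      #PBS -l nodes=1:ppn=1              # one core is enough for a serial R run
      #PBS -l walltime=02:00:00          # assumed time limit
      cd $PBS_O_WORKDIR                  # run in the directory of submission
      module load math/r/2.11.1          # assumed module version
      R CMD BATCH --vanilla my-analysis.R my-analysis.Rout

   Submitted with qsub, the R output and messages end up in my-analysis.Rout.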

21  Parallel Programming

   Shared memory: multithreading methods
   - A manager (master) thread forks a number of worker (slave) threads and the
     tasks are divided among them
   - OpenMP (shared-memory programming via compiler directives in C/C++ and Fortran)
   - Pthreads (C libraries for creating and managing POSIX threads)

   Distributed memory: message-passing methods
   - Work is divided among processes, and communication happens by sending
     messages between processes
   - MPI (Message Passing Interface; standardized and portable message-passing libraries)
   - PVM (Parallel Virtual Machine; for using a network of heterogeneous Unix
     and/or Windows machines)

22  Parallel R Packages

   Overview of parallel R packages: M. Schmidberger et al.
   (Tech. Report 47, Department of Statistics, LMU Munich)

23  MPI Basics

   - The number of available processes is fixed at program start
   - Each process runs on a single core with its own data
   - Processes communicate via message passing to obtain data that resides in
     the memory of other processes
   - An MPI implementation provides a library of message-passing functions

   Some MPI concepts:
   - Point-to-point communication (different modes)
   - Collective communication (broadcast, gather, all-to-all)
   - Communicators (to group processes)
   - Virtual topologies (to determine neighbors of processes)
   - Error handling

   Speed-up depends on the time spent on message passing.

24  Rmpi Package

   - Provides an interface to MPI functions from R
   - Requires an installed MPI implementation
   - The package includes scripts to launch R instances
   - The programmer determines the workload of the processes and the
     communication between them
   - Rmpi tutorial: ...
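
   A minimal usage sketch (the functions are from the Rmpi package; the module
   version and library path are assumptions, and the spawning step can depend
   on the MPI implementation):

      module load math/r/2.11.1
      R --vanilla <<'EOF'
      library(Rmpi, lib.loc = "~/R/2.11.1/Rmpi")   # assumed personal library path
      mpi.spawn.Rslaves(nslaves = 4)               # launch 4 worker R instances
      # every worker reports its rank within the communicator
      print(mpi.remote.exec(paste("worker", mpi.comm.rank(), "of", mpi.comm.size())))
      mpi.close.Rslaves()                          # shut the workers down
      mpi.quit()
      EOF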

25  Rmpi Installation

   Download the Rmpi package (source archive from CRAN):
      wget .../Rmpi_<version>.tar.gz

   Load the R module:
      module load math/r/<version>

   Create an installation directory:
      mkdir -p $HOME/R/2.11.1/Rmpi

   Install the package:
      R CMD INSTALL \
        --configure-args=--with-mpi=/opt/bwgrid/mpi/openmpi/1.4.2-gnu-4.1 \
        -l $HOME/R/2.11.1/Rmpi Rmpi_<version>.tar.gz

   Example job script: bwgrid-rmpi.pbs (sketched below)
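
   A sketch of what bwgrid-rmpi.pbs might look like (the actual script is not
   reproduced here; node counts, walltime, and the launch convention are
   assumptions):

      #!/bin/bash
      #PBS -l nodes=2:ppn=8              # assumed: two 8-core nodes
      #PBS -l walltime=01:00:00
      cd $PBS_O_WORKDIR
      module load math/r/2.11.1          # assumed module version
      # one common Rmpi pattern: start a single R master process,
      # which then spawns its workers through the MPI runtime
      mpirun -np 1 R CMD BATCH --vanilla my-rmpi-program.R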

26  R Presentation (D. Junge)

   Using R for data analysis on cluster computers: distributed computing and
   statistical modeling with MPI and Rmpi on bwgrid

   D. Junge, MZES, University of Mannheim

27  Conclusion: Discussion of Topics for Further Meetings

   Date        Location   Main Focus               User Contribution
   ...         MA         Matlab & Matlab Clones   A. Uhlendorff
   ...         MA         Statistics Software R    D. Junge
   Nov 2010    HD         (open)
   Dec 2010    HD         (open)
   Jan 2011    HD         (open)
