GC3: Grid Computing Competence Center Cluster computing, I Batch-queueing systems


1 GC3: Grid Computing Competence Center. Cluster computing, I: Batch-queueing systems. Riccardo Murri, Sergio Maffioletti. Grid Computing Competence Center, Organisch-Chemisches Institut, University of Zurich. Oct. 23, 2012

2 Today's topic: batch job processing (the purpose) on clusters (the hardware architecture).

3 What is a cluster? I
[Diagram: a user connects with ssh username@frontend.node from the internet to frontend.node.uzh.ch; a local network fabric links the frontend to the compute nodes compute-0-0.local, compute-0-1.local, ..., compute-0-27.local.]
A cluster is a group of computers with a direct network interconnect, centralized management, and distributed execution facilities.

4 What is a cluster? II
Centralized: authorization and authentication; shared filesystem; application execution and management.
Distributed: execution of jobs; multiple units of the same parallel job may reside on separate resources.

5 What is an HPC cluster? A cluster is a group of computers with a direct network interconnect, centralized management, and distributed execution facilities. An HPC cluster is a cluster with a fast local network interconnect, specialized for the execution of parallel distributed-memory programs. A supercomputer is (currently) a very large HPC cluster with a very fast local network interconnect.

6 What s batch job processing? Asynchronous execution of shell commands. Wikipedia: Asynchronous actions are actions executed in a non-blocking scheme, allowing the main program flow to continue processing.
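A minimal shell-level illustration of the difference (file names are hypothetical; a real batch system adds queueing, scheduling and accounting on top of this):
# synchronous: the shell blocks until the command finishes
sha256sum big-dataset.tar
# asynchronous: the command keeps running in the background, the prompt
# returns immediately, and the output is collected for later inspection
nohup sha256sum big-dataset.tar > checksum.out 2>&1 &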

7 Lifecycle of a batch job
1. A command to run is submitted to the batch-processing system.
2. The batch job scheduler selects appropriate resources to run the job.
3. The resource manager executes the job.
4. Users monitor the job execution state.

8 Functional components of a batch job system
Resource Manager: monitors the compute infrastructure, launches and supervises jobs, cleans up after termination.
Job manager / scheduler: allocates resources and time slots (scheduling).
Workload Manager: policy and orchestration at the job-collection level: fair share, workflow orchestration, QoS, SLAs, etc.
Reference: O. Richard, "Batch Scheduler and File Management", 3rd workshop of the INRIA-Illinois Joint Laboratory on Petascale Computing, June 21-24, 2010, Bordeaux, France.

9 Architecture of a batch job system
[Diagram: a client on the frontend (1) submits a job to the server; the scheduler (2) allocates resources; the resource manager (3) starts the job through a per-node monitor daemon on one of the compute nodes (compute-0-0.local, compute-0-1.local, ..., compute-0-27.local); the client (4) monitors execution, while machine-status monitoring data flows from the node monitors back to the resource manager.]

10 Grid Engine
Sun Grid Engine (GE) is a batch-queuing system originally produced by Sun Microsystems; it was made open source in 2001. After the acquisition of Sun by Oracle, the product forked:
Open Grid Scheduler (OGS) and Son of Grid Engine (SGE), independent open-source versions.
Oracle Grid Engine, commercial and focused on enterprise technical computing.
Univa Grid Engine, a commercial-only version developed by the core Grid Engine engineering team from Sun.
Used on the UZH main HPC cluster, Schroedinger.

11 GE architecture, I
sge_qmaster
Runs on the master node.
Accepts client requests (job submission, job/host state inspection).
Schedules jobs on compute nodes (formerly a separate sge_schedd process).
Client programs: qhost, qsub, qstat
Run by the user on a submit node.
Clients for sge_qmaster.
The master daemon keeps a list of authorized submit nodes.

12 GE architecture, II
sge_execd
Runs on every compute node.
Accepts job-start requests from sge_qmaster.
Monitors node status (load average, free memory, etc.) and reports back to sge_qmaster.
sge_shepherd
Spawned by sge_execd when starting a job.
Monitors the execution of a single job.

13 GE architecture, III
[Diagram: the batch-system architecture of slide 9 mapped onto Grid Engine components: qsub (1) submits a job to sge_qmaster on the frontend, which allocates resources (2) and starts the job (3) through the sge_execd daemon on a compute node; sge_execd spawns an sge_shepherd for the job, and qstat (4) monitors execution; each sge_execd on compute-0-0.local, compute-0-1.local, ..., compute-0-27.local reports machine status back to sge_qmaster.]

14 Lifecycle of a Job: user perspective
1. Prepare the job script (normally a shell script).
2. Define the resource requirements.
3. Submit the job and record its job ID.
4. Monitor the status of the job (using the job ID).
5. When done, inspect the results.
6. Otherwise, check the logs.

15 Prepare job script
#!/bin/bash
# MZXML_NAME is expected to be set in the job's environment
MZXMLSEARCH="./MzXML2Search"
${MZXMLSEARCH} -dta ${MZXML_NAME}.mzXML
if [ ! $? -eq 0 ]; then
  echo "[FATAL]"
  exit 1
fi
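As a usage sketch, the input name can be passed into the job's environment at submission time with qsub's -v option (the value sample01 is hypothetical):
# export MZXML_NAME into the job environment at submission
qsub -v MZXML_NAME=sample01 test.sh
# or export the whole current shell environment to the job
qsub -V test.sh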

16 Submit job and monitor using jobid
# qsub test.sh
534.localhost
# qstat 534
Job id         Name     S  Queue
534.localhost  test.sh  R  default
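Two related commands complete the picture (a sketch; job ID 534 is the one from the example above, and output formats vary between batch systems):
# cancel the job if it is no longer needed
qdel 534
# after the job has finished, query its accounting record
qacct -j 534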

17 Lifecycle of a Job: system perspective
1. The job is submitted from a DRM client.
2. The Resource Manager stores the job in a queue; the queue is selected by inspecting DRM policies and the job's description.
3. The Scheduler starts a scheduling cycle: it collects resource information from the exec hosts, inspects the jobs in the queues, applies scheduling policies to sort the jobs in the queues, and sends a run request to the Resource Manager.
4. The Resource Manager sends the job to an exec host to run.
5. The exec host receives the payload and runs it: the job is executed using the user's credentials; resource utilization is periodically reported to the Resource Manager; when the job finishes, the exec host reports back to the Resource Manager.
6. The Resource Manager updates the job's state.

18 Job lifecycle

19 Implementation issues
I/O: how to provide input data to the job and collect output data from it.
Scheduling: when should the job start?
Resource allocation: on what computer(s) should it run? How to cope with heterogeneous resource pools?
Job monitoring and accounting: what usage records should be collected and stored?

20 I/O management in HPC clusters
Two main ways:
1. Shared file system
2. Data staging
Reference: O. Richard, "Batch Scheduler and File Management", 3rd workshop of the INRIA-Illinois Joint Laboratory on Petascale Computing, June 21-24, 2010, Bordeaux, France.

21 Shared file systems
Used on most cluster systems.
A parallel filesystem (e.g., Lustre, GPFS, PVFS, NFSv4.1, ...) provides performance and scalability.
Often there are separate filesystems based on features:
a filesystem for persistent / longer-term data (e.g., /home);
another one for ephemeral I/O (deleted after the job has finished running);
the responsibility is on the user to move data into the appropriate filesystem.
Easy to use: no difference from the local I/O model.

22 Data staging
The job's data requirements are identified and provided by the user in the submitted script.
Stage-in: input files are transferred to the local disk of the compute nodes before the job starts.
Stage-out: output files are transferred from the nodes to mass storage after execution.
Nowadays rarely used on clusters; mainly used in the Grid context.
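A minimal sketch of manual stage-in/stage-out inside a job script (program name and paths are hypothetical; $TMPDIR is assumed to be the node-local per-job scratch directory that Grid Engine exports to the job):
#!/bin/bash
# stage-in: copy the input from the shared filesystem to node-local scratch
cp "$HOME/data/input.dat" "$TMPDIR/"
cd "$TMPDIR"
# run the computation against the local copy
"$HOME/bin/my_program" input.dat > output.dat
# stage-out: copy the results back to the shared filesystem before the job ends
cp output.dat "$HOME/results/"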

23 Scheduling
This is long-term scheduling: jobs may last hours, days, even months!
HPC job scheduling is usually non-preemptive.
Compute resources are fully utilized; there is little room for sharing.
Common scheduling algorithms are usually variations of FCFS or priority-based scheduling.

24 Scheduling: terminology
Turnaround time: the total time elapsed from the moment a job is submitted to the moment it finishes running.
Wait time: the time elapsed from submission until the job actually starts running.
Wall-time: the time elapsed from job start to job end (abbreviation of wall-clock time).
CPU time: the total time spent by the CPU(s) executing the job program's code.
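A worked example with hypothetical numbers: a job submitted at 10:00 that starts running at 10:30 and finishes at 11:30 has wait time = 30 min, wall-time = 1 h, and turnaround time = 1 h 30 min; in general, turnaround time = wait time + wall-time. If it kept 4 cores fully busy for that hour, its CPU time would be about 4 CPU-hours.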

25 Scheduling: FCFS, I
First come, first served:
Job requests are kept in a queue.
New job requests (submissions) are appended to the back of the queue.
Each time a suitable execution slot is freed, the job at the front of the queue is run.

26 Scheduling: FCFS, II
Issues with bare FCFS:
1. The average waiting time might be long: e.g., a user submits a large number of very long jobs, and other users have to wait a long time before their shorter jobs run. Solutions: separate queues, backfill, priority-based scheduling.
2. When there are parallel jobs spanning multiple execution units, the scheduler has to keep some nodes idle in order to allocate enough resources. Solution: backfill.

27 Scheduling: separate queues
Create separate job queues.
The submission queue may be explicitly chosen by the user, or selected by the scheduler based on job characteristics.
Each queue is associated with a different set of execution nodes.
Each queue has different run features, e.g., a different maximum run time.
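A submission sketch for Grid Engine (the queue name short.q is hypothetical; actual queue names are site-specific):
# list the queues configured on this cluster
qconf -sql
# submit explicitly to a queue reserved for short jobs
qsub -q short.q job.sh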

28 Scheduling: backfill
Jobs jump ahead in the queue and are executed on reserved nodes if they will be finished by the time the job holding the reservation is scheduled to start.
Requires the job duration to be known in advance!
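Backfill can only consider jobs whose duration the scheduler knows; in Grid Engine this is typically declared at submission time (a sketch, the value is hypothetical):
# declare a hard run-time limit of 30 minutes, so the scheduler
# can fit this job into a gap before a pending reservation
qsub -l h_rt=00:30:00 short_job.sh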

29 Scheduling: SJF, I
Shortest job first:
The job queue is sorted according to duration: the shortest jobs are moved to the front.
Requires the job duration to be known in advance!

30 Scheduling: SJF, II
If all jobs are known in advance, SJF can be proved to deliver the optimal average wait time.
Otherwise, it may delay long jobs indefinitely:
At 10am, job X with an expected runtime of 4 hours is submitted; it has to wait 2 hours in the queue.
At 11am, 10 jobs of 2 hours runtime each are submitted; they jump ahead in the queue and delay job X by 20 hours.
At noon, 5 more jobs of 1 hour runtime each are submitted; they delay job X by another 5 hours.
Solution: add a deadline factor, i.e., take into account the time a job has already spent waiting in the queue.

31 Priority-based scheduling
Sort the job queues according to some priority function.
The priority function is usually a weighted sum of various contributions, e.g.:
requested run time;
number of processors;
wait time in the queue;
recent usage by the same user/group/department (fair share);
administrator-set QoS.
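As an illustration of such a weighted sum (the weights and terms here are hypothetical, not Grid Engine's actual formula):
priority(job) = w_wait * (time spent waiting) + w_size * (requested cores * requested runtime) + w_fs * (fair-share factor) + w_qos * (administrator-set QoS)
with w_size typically negative so that smaller jobs are favoured; at each scheduling cycle, jobs are dispatched in decreasing order of this value.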

32 Fair-share scheduling
Fair-share prioritization assigns higher priorities to users/groups/etc. that have not used up their resource quota (usually expressed in CPU time).
Important parameters in defining a fair-share policy:
window length: how much historical information is kept and used for calculating resource usage;
interval: how often resource utilization is computed;
decay: weights applied to resource usage in the past (e.g., 2 hours of CPU time one week ago might weigh less than 2 hours of CPU time today).
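A worked example of decay (numbers hypothetical): with a half-life of one week, recorded usage is multiplied by 0.5 for every week of age, so 8 CPU-hours consumed two weeks ago count as 8 * 0.5^2 = 2 CPU-hours of effective usage today, while 8 CPU-hours consumed today count in full.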

33 Resource allocation, I
Resource allocation is the act of selecting execution units out of the available pool for running a job.
Over time, clusters tend to grow inhomogeneously: new nodes are added that are different from the older ones.
Jobs differ in computational and hardware requirements, e.g.:
short jobs vs. long-running jobs;
large memory, hence fewer jobs fit in a single multi-core node;
I/O bound, hence a fast filesystem is needed.

34 Resource allocation, II
General resource allocation algorithm (match-making):
1. the user specifies resource requirements during job submission;
2. filtering: the scheduler filters resources based on the evaluation of a boolean formula, usually the logical AND of the resource requirements;
3. ranking: the matching resources are sorted, and the first-ranking one gets the job.
Normally the filtering and ranking functions are fixed or can only be modified by the cluster admin. A notable exception is the Condor batch system, which allows users to specify arbitrary filtering and ranking functions.

35 Example: resource requirements in SGE
Grid Engine allows specifying resource requirements within a job script:
#!/bin/bash
#$ -q all.q          # queue name
#$ -l s_vmem=300m    # memory
#$ -l s_rt=60        # walltime
#$ -l gpu=1          # require 1 GPGPU
#$ -pe mpich 32      # CPU cores
MZXMLSEARCH="./MzXML2Search"
...
(Note that you write s_rt=60, but for the purpose of filtering the system understands it as s_rt >= 60.)
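The #$ lines are simply embedded qsub options, so the same requirements can equivalently be given on the command line (a sketch using the resource names from the example above; job.sh is hypothetical):
qsub -q all.q -l s_vmem=300m -l s_rt=60 -l gpu=1 -pe mpich 32 job.sh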

36 Condor
[Diagram: a condor_agent on the user's side, driven by condor_submit, talks to the condor_master; several batch-system servers each run a condor_resource and front their own local 1 Gb/s Ethernet network of compute nodes (compute node 1, compute node 2, ..., compute node N).]

37 Condor overview
Agents (client-side software) and Resources (cluster-side software) advertise their requests and capabilities to the Condor Master.
The Master performs match-making between the Agents' requests and the Resources' offerings.
An Agent then sends its computational job directly to the matching Resource.
Reference: Thain, D., Tannenbaum, T. and Livny, M. (2005): Distributed computing in practice: the Condor experience. Concurrency and Computation: Practice and Experience, 17.

38 What is matchmaking?

39 Matchmaking, I
The same match-making idea applies in Condor, except that the schema is not fixed.
Agents and Resources report their requests and offers using the ClassAd format (an enriched key=value format).
There is no prescribed schema, hence a Resource is free to advertise any interesting feature it has, and to represent it in any way that fits the key=value model.

40 Matchmaking, II
1. Agents specify a Requirements constraint: a boolean expression that can use any value from the Agent's own ClassAd (self) or from the Resource's (other).
2a. Resources whose offered ClassAd does not satisfy the Requirements constraint are discarded.
2b. Conversely, if the Agent's ClassAd does not satisfy the Resource's Requirements, the Resource is discarded.
3. The surviving Resources are sorted according to the value of the Rank expression in the Agent's ClassAd, and their list is returned to the Agent.

41 Example: Job ClassAd
Select 64-bit Linux hosts, and sort them preferring hosts with larger memory and CPU speed.
Requirements = Arch == "x86_64" && OpSys == "LINUX"
Rank = TARGET.Memory + TARGET.Mips
Reference: http://research.cs.wisc.edu/condor/manual/v6.4/4_1Condor_s_ClassAd.html
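To see how such Requirements match the available machines, Condor offers an analysis mode (a sketch; the job ID 1234.0 is hypothetical):
# after submitting, ask Condor why and where job 1234.0 does or does not match
condor_q -analyze 1234.0
# summarize the machines that satisfy a given constraint
condor_status -constraint 'Arch == "x86_64" && OpSys == "LINUX"'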

42 Example: Resource ClassAd
A complex access policy, giving priority to users from the owner research group, then other friend users, and then the rest...
Friend = Owner == "tannenba"
ResearchGroup = (Owner == "jbasney" || Owner == "raman")
Trusted = Owner != "rival"
Requirements = Trusted && ( ResearchGroup || LoadAvg < 0.3 && KeyboardIdle > 15*60 )
Rank = Friend + ResearchGroup*10
Resource ClassAds specify an access/usage policy for the resource.

43 Resource allocation, III Problem: How do you submit a job that requires 200GB of local scratch space? Or 16 cores in a single node?

44 Resource allocation, IV
The names and types of resource requirements vary from cluster to cluster:
defaults change with each batch-system software release;
custom requirements depend on the local system administrator.
Job management software must be adapted to the local cluster: when you get access to a new cluster, you must rewrite a large portion of your submission scripts.
This applies to Condor as well: since ClassAds are free-form, defining which attributes can be used and relied upon is an organizational problem.

45 All these job management systems are based on a push model (you send the job to an execution cluster). Is there conversely a pull model?
