1 Tutorial: Using WestGrid
Drew Leske, Compute Canada/WestGrid Site Lead, University of Victoria

2 Fall 2013 Seminar Series

Date          Speaker           Topic
23 September  Lindsay Sill      Introduction to WestGrid
9 October     Drew Leske        Tutorial: Using WestGrid
23 October    Jonatan Aronsson  Tutorial: Introduction to the WestGrid Development Environment
6 November    Fiona Brinkman    Case Study: Genomics, Bioinformatics and HPC: How Computational Analyses Are Transforming Infectious Disease Control

More information on these and other seminars offered:

3 WestGrid User Basics

To use WestGrid systems effectively, you will need to know:
* Where to get help and information
* Which systems are suited to your project
* How to log on to those systems
* Basic Linux commands
* How to define and submit batch jobs

4 Finding Information and Getting Help

* The WestGrid website:
  * Everything from guidance on choosing systems and running jobs to information about specific systems
  * System health and upcoming maintenance events
* WestGrid Support: support@westgrid.ca
  * For everything from account problems to parallelization questions and code optimization advice
  * No question too big or too small

These are the most important items to take with you today.

5 Choosing a System

The WestGrid website describes each computing facility and its size, architecture, memory, interconnect, and associated storage:

Some systems will be better suited to your project than others. As well, some software is only available on certain systems. We can help you find the best system for your needs.

6 Choosing a System: Here Are a Few

System    Cores  Memory     Interconnect       Storage
Hermes           GB/node    2 x GigE, 10:1 IB  1.2 PB
Nestor           GB/node    IB QDR             1.2 PB
Hungabee         TB shared  IB QDR             405 TB
Silo      n/a    n/a        n/a                3.15 PB

These four systems represent, in general terms: a general-purpose system appropriate for serial jobs; a cluster with a high-speed interconnect, suitable for parallel jobs; a shared-memory system for problems requiring large amounts of memory; and a storage site.

7 Connecting to WestGrid

[Diagram: your workstation, the login nodes, the scheduler, and the Nestor cluster]

8 Connecting: Software You Will Need

* Access to WestGrid systems is via Secure Shell (ssh)
  * Linux and Mac clients are included in the OS
  * Windows: PuTTY, WinSSH
* File transfer via Secure Copy (scp) or Secure FTP (sftp)
  * Linux and Mac clients included in the OS
  * Windows: WinSCP, FileZilla
* Grid tools also available

Everything you need to know:
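For example, from a Linux or Mac terminal, logging in and copying a file up look like this (the hostname and username are illustrative; the WestGrid website lists the actual login hosts):

    ssh username@hermes.westgrid.ca                   # log in to a login node
    scp mydata.tar.gz username@hermes.westgrid.ca:    # copy a file to your home directory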

9 Basic Linux Commands

* List directory contents: ls, ls -a, ls -l
* Create, remove directory: mkdir <dir>, rmdir <dir>
* Change to directory, change to parent: cd <dir>, cd ..
* Copy files: cp <src> <tgt>
* Copy directory: cp -r <src> <tgt>
* Move file or directory: mv <file> <dir>
* Rename file or directory: mv <oldfile> <newfile>
* Remove file: rm <file>
* View file: less <file>, cat <file>

Everything you need to know:
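A short session tying a few of these together (the directory and file names are only examples):

    mkdir project          # create a working directory
    cd project             # change into it
    cp ~/input.dat .       # copy a file into the current directory
    ls -l                  # list the contents in long format
    less input.dat         # page through the file
    cd ..                  # return to the parent directory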

10 Job Basics

* Login nodes are for data management, editing and compiling source code, quick tests, and job management
* The real work is done on worker nodes
* Requests are submitted to the batch system and enter an appropriate queue
* Jobs are dispatched to worker nodes by the scheduler according to their priority, determined mainly by FairShare

11 Job Basics: Job Dispatch

[Diagram: your workstation, the login nodes, the scheduler, and the Nestor worker nodes]

12 Job Basics: FairShare

* Everybody gets a fair share, based on allocation (if any) and usage
* In essence, a job whose owner has had little usage over the past while will have higher priority than a job whose owner has been a heavy user: hence, "fair share"
* Some groups are given a bigger share (RAC allocations)

13 Job Basics: Job Definition

* A batch job is defined by a script with special directives embedded that tell the cluster what's required for the job, as sketched below:
  * Memory
  * Cores
  * Wall time
* If your job exceeds these resources, it may be terminated before completion :-(
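In the batch system's directive syntax, a request for these resources might look like the following sketch (the values are illustrative, not recommendations):

    #PBS -l procs=4              # number of processor cores
    #PBS -l mem=2gb              # total memory for the job
    #PBS -l walltime=24:00:00    # maximum wall-clock time (hh:mm:ss)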

14 Job Basics: Essential Commands

* Submit a job: qsub <script>
* Check status of jobs: qstat <job>
* Check scheduling: showq
* Delete a job: qdel <job>
* When will my job start? showstart <job>
* How do I use that command? man qstat
* How do I use showq? showq --help

Everything you need to know:
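For instance, with a hypothetical job ID of 12345, you could check on and then cancel the job like so:

    showstart 12345    # estimate when the scheduler will start the job
    qdel 12345         # remove the job from the queue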

15 Job Basics: hello.pbs

    #!/bin/bash
    #PBS -l procs=1
    #PBS -j oe
    #PBS -W Output_Path=$HOME/ /${PBS_JOBID%%.*}

    date
    echo "Hello, world!"
    echo "Am having a wonderful time in $(/bin/hostname)."
    echo "Love, $(whoami)"
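Two details worth noting: the -j oe directive merges the job's standard error into its standard output so everything lands in one file, and the shell expansion ${PBS_JOBID%%.*} strips everything from the first dot onward, reducing a full job ID such as 12345.moab01.westgrid.uvic.ca (the number is hypothetical) to just 12345.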

16 Job Basics: Submitting hello.pbs

* Here's where we submit the job to the cluster:

    westgrid# qsub hello.pbs
    moab01.westgrid.uvic.ca

* Here we can check the status of the job:

    westgrid# qstat
    Job id    Name       User    Time Use  S  Queue
    moab01    hello.pbs  dleske  0         Q  hermes

* If you get "Unknown Job Id", the job has completed

17 Job Basics: Results!

* When the job has completed, the output files you specified in the job script will contain the results. For example:

    Wed Oct 9 08:48:51 PDT 2013
    Hello, world!
    Am having a wonderful time in hermes0195.
    Love, dleske

* Whoop! Science!
* There may be other output in these files, provided by the batch system

18 Job Basics: Your First Jobs

* Everything you need to get started is at:
* Run a couple of goofy little test jobs to get familiar with how the system works (see the sketch below)
  * qsub, qstat, showq, qdel
* Something didn't work?
  * Job output usually provides the best clues
  * E-mail support@westgrid.ca BUT PLEASE...
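One quick way to generate a goofy test job, assuming the batch system reads a script from standard input when no file is given (as Torque does), is:

    echo "sleep 30" | qsub    # submit a 30-second do-nothing job
    qstat                     # watch it move through the queue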

19 Job Basics: HELP!

* If your job failed and you can't figure out what went wrong, send a note to support@westgrid.ca
* Please include the essential details:
  * The name of the system you are using
  * The job ID
  * Your WestGrid user ID
  * Anything else we may need to know to solve your real problem

20 Recap: WestGrid User Basics

To use WestGrid systems effectively, you will need to know:
* Where to get help and information
* Which systems are suited to your project
* How to log on to those systems
* Basic Linux commands
* How to define and submit batch jobs

21 Information and Help

The most important things to take away with you today:
* WestGrid website:
* WestGrid Support: support@westgrid.ca

You may also have local support at your institution. Don't be shy: we are here to support and enable you and your work.

22 Thanks for coming!
