HPC at IU Overview. Abhinav Thota, Research Technologies, Indiana University




What is HPC/cyberinfrastructure?

Why should you care?
- Data sizes are growing
- Need to get to the solution faster
- Compute power is cheap
- Computing resources are FREE! (but only if you parallelize)
- Single-core performance has plateaued

What's the hold up?

It's not that hard, but you need a manual
- You only need to learn a few commands to start with
- We will start with a short overview and some practical details, along with a hands-on exercise
- By the end of the day you will be ready to run your own jobs on Quarry and Big Red II
- Don't forget to take the cheat sheet that will help you get started on Big Red II

Cyberinfrastructure at IU
Supercomputers:
- Big Red II
- Quarry
- Mason
- Research Database Complex
Support & Consulting:
- High Performance Systems (Admin)
- Scientific Apps and Perf. Tuning
- Research Analytics
- Storage
- Visualization
http://researchtech.iu.edu/
Storage:
- Data Capacitor 2, a high performance file system
- Data Capacitor Wide Area Network
- Research File System
- Scholarly Data Archive
Software sales (Statistical and Mathematical titles):
- SPSS
- SAS
- Mathematica
- MATLAB
- Maple

Supercomputing
- Big Red II: 46th fastest in the world at debut, 62nd now; has both CPUs and GPUs; for large parallel apps
- Quarry: a general purpose cluster; for serial apps
- Mason: a large memory cluster, 512 GB of memory per node; for apps that need large memory
- Shoot us an email if you are unsure about which system to use for your work: sciapt@iu.edu
- Research Database Complex (RDC): host your research databases and data-intensive applications
Common terms: CPU, GPU, Core, Node

How do I get access?
- I am sure you all already did this
- Create an account: https://itaccounts.iu.edu/
- Graduate students with hourly positions may not see BRII listed; please contact us for an account!
- ssh to:
  - bigred2.uits.iu.edu
  - quarry.uits.indiana.edu
  - mason.indiana.edu
- IU network ID and passphrase
- Read the message of the day (MOTD)
https://kb.iu.edu/d/achr

SSH and SCP
- Sounds great, where do I log in? Not in a browser :(
- SSH is used to log in to a remote machine
- SCP is used to copy files to and from a remote machine
- Example:
  ssh username@bigred2.uits.iu.edu
  scp local-file username@bigred2.uits.iu.edu:/path/to/remote/dir
https://kb.iu.edu/d/aelc
https://kb.iu.edu/d/agye
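A couple of hedged scp variations, not from the original slide; the remote paths below are placeholders, not specific IU directories:

  # Copy a results file from Big Red II back to the current directory
  # on your workstation (remote path is a placeholder):
  scp username@bigred2.uits.iu.edu:/N/dc2/scratch/username/results.dat .

  # Copy an entire local directory to the cluster recursively:
  scp -r results/ username@bigred2.uits.iu.edu:/N/dc2/scratch/username/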

Environment in your login session
- Home directory: /N/u/<username>/BigRed2
- Scratch space for temporary files: /N/dc2/scratch/<username>
- Bash shell
- Changes via hps-admin@iu.edu
https://kb.iu.edu/d/avmj
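A minimal sketch of using these paths from a login shell, assuming the directory layout quoted above; <username> stands for your IU username:

  echo $HOME                       # /N/u/<username>/BigRed2 on Big Red II
  cd /N/dc2/scratch/<username>     # keep large temporary files in scratch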

How to find and use software
- The module command adds software to your environment:
  module avail
  module list
  module load <key>
  module unload <key>
- Permanently configure your environment via the .modules file
- Modules are arranged in groups: development, applications, life sciences, etc.
https://kb.iu.edu/d/bcwy
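A short example session; the module names here are illustrative, so run module avail to see what is actually installed on Quarry or Big Red II:

  module avail                  # see everything that is available
  module load gcc               # illustrative: load a compiler module
  module load openmpi           # illustrative: load an MPI stack
  module list                   # confirm what is loaded in this session
  module unload openmpi         # drop a module you no longer need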

Where else can I look for software?
- For software already on the machine: https://cybergateway.uits.iu.edu/iugateway/searchmodulesinfo
- On https://cybergateway.uits.iu.edu you can also:
  - look at your usage stats (both compute and storage)
  - check the status of the machines
  - look at the queues, your jobs in the queue, etc.
- Statistical software: http://rt.uits.iu.edu/visualization/analytics/software.php
- HPC licensed software: http://rt.uits.iu.edu/systems/sciapt/licensed-software.php

Queues
- Hundreds of nodes and thousands of cores are shared among the users
- This is done with the help of a scheduler and queues
- Submitting a job:
  - Put together the commands that you want to run
  - Add a few PBS commands to go before the actual commands
  - Submit the job
- The same system is used on all three machines
- Check your cheat sheet for sample scripts
https://kb.iu.edu/d/bdkd

Sample PBS script
At the most basic level, on Quarry:

  #!/bin/bash
  #PBS -l nodes=2:ppn=6,walltime=30:00
  cd /path/to/working/dir
  mpirun -np 12 -machinefile $PBS_NODEFILE ~/bin/binaryname

That's it! Just a couple of extra lines to tell PBS what you need.
Put these lines in a text file and run: qsub script.sh
Some other useful commands to see what's going on with the queues:
  qstat -Q           # shows available queues
  qstat -u username  # shows the status of your queued jobs
  showq -i           # shows what's going to run next
https://kb.iu.edu/d/avmy
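For comparison, a sketch of what the equivalent might look like on Big Red II, which launches compute-node work with aprun rather than mpirun; the ppn count and queue name below are assumptions, so check the Big Red II documentation or your cheat sheet for the real values:

  #!/bin/bash
  #PBS -l nodes=2:ppn=32,walltime=00:30:00
  #PBS -q cpu                       # assumed queue name
  cd $PBS_O_WORKDIR                 # start in the directory the job was submitted from
  aprun -n 64 ~/bin/binaryname      # 2 nodes x 32 cores = 64 MPI ranks (assumed core count)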

Interactive jobs
- For graphical applications: you can launch GUIs
- For testing
- qsub -I, followed by options to specify walltime, queue, etc.:
  qsub -I -l walltime=00:30:00 -q debug
- Straightforward on Quarry
- You will land on an aprun node on Big Red II
https://kb.iu.edu/d/bdsi
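A sketch of what an interactive session on Big Red II might look like; the walltime and queue come from the example above, and the test binary is a placeholder:

  qsub -I -l walltime=00:30:00 -q debug
  # Once the job starts you land on an aprun (service) node, so launch
  # the actual computation on the compute nodes with aprun:
  cd /N/dc2/scratch/<username>
  aprun -n 1 ./mytest               # placeholder binary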

How to get support?
- You can write to us if you need help with anything that has to do with these machines
- Common questions:
  - Help with installing software
  - How would I best go about doing xyz?
  - I was thinking about xyz, would that work on Quarry?
  - Can you help me do xyz?
- Most of the answers to beginner-level questions are already out there on the KB pages
- Just need to search, and then search again with different terms :)

Support via email, phone, in person
- We use a ticketing system for questions via email
- You can call us on the phone or meet us in person
- We are in the CIB; RA is on campus too, in the SSRC
- Personalized presentations and talks for research groups
- Workshops every semester

How much does this cost?
- The computing resources and storage are free for normal use (unless you need huge amounts of dedicated compute time or large data storage)
- Short-term support and consulting is also free; this involves help with day-to-day issues:
  - Installing software
  - Troubleshooting problems, etc.
- Long-term support and consulting is possible on a paid basis

Contact information
Here are a few email addresses, but don't worry: write to any one of us and we will get your question to the right people.
- HPS (Admins): hps-admin@iu.edu
- Scientific Apps: sciapt@iu.edu
- Research Analytics: researchanalytics@iu.edu
- HPFS (DC2): hpfs-admin@iu.edu
- SDA: store-admin@iu.edu
- Visualization: vishelp@iu.edu