New High-Performance Computing Cluster: PAULI. Sascha Frick, Institute for Physical Chemistry


Slide 1: New High-Performance Computing Cluster: PAULI
Sascha Frick, Institute for Physical Chemistry
02/05/2012

Slide 2: Outline

1 About this seminar
2 New Hardware
3 Folder Structure and Software
4 Running calculations on pauli

Slide 3: About this seminar

What is this talk about?
- New computation cluster hardware specifications
- Access to the cluster
- Folder structure: where is what, whom you can ask
- How to start a job in Gaussian and Turbomole

What is this talk not about?
- Any kind of theory behind the computation methods
- Which method to use for which job
- How to set up job-specific input files (there is a module for that: chem0503)

Slide 5: Outline (New Hardware)

1 About this seminar
2 New Hardware
3 Folder Structure and Software
4 Running calculations on pauli

Slide 6: General information (New Hardware)

Some general facts
- Named after Wolfgang Pauli (Pauli principle)
- Located in the computation center (RZ)
- The cluster belongs to Physical Chemistry
- Hardware and operating system administration by the computation center
- Computation software administration by Physical Chemistry

People to ask
- Operating system related: Dr. Klaus Nielsen
- Software related: Sascha Frick, Prof. Dr. Bernd Hartke

Slide 8: Technical specifications (New Hardware)

Available machines
- 1 frontend system: pauli (login server, file server)
- 6 computation nodes: pauli01-pauli06
- Connected via Gigabit Ethernet

Frontend
- CPU: 2x AMD Opteron, 2.4 GHz
- RAM: 8x 4 GB DDR3 PC1333 Reg. ECC (2 GB/core)
- Storage: 8x 3 TB Hitachi, RAID6 + spare (net 15 TB /home, with backup)

Computation nodes
- CPU: 4x AMD Opteron, 2.2 GHz
- RAM: 32x 16 GB DDR3 PC1333 Reg. ECC (8 GB/core)
- Storage: 6x 2 TB Western Digital, RAID5 + spare (net 8 TB /scratch)

Slide 11: Access to pauli (New Hardware)

Required account
- Computation center account (suphcxxx)
- Old named accounts from dirac
- One account per workgroup for trainees and bachelor students

Access methods (see the example commands below)
- Only the frontend is directly accessible (pauli.phc.uni-kiel.de)
- Computation nodes are accessible from the frontend (only for debugging)
- Access via SSH/SCP (on Windows via PuTTY, WinSCP)
- Right now home directories are not accessible via Samba share (needed?)

Computation remarks
- Never calculate on the frontend node; mini tests are OK
- Never start jobs on the compute nodes by hand; always use the queuing system
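
As a quick illustration, logging in and moving files from a Linux or Mac terminal looks like the following; on Windows, PuTTY and WinSCP provide the same functionality. The account name suphc123 and the folder calc01 are placeholders.

# Log in to the frontend (replace suphc123 with your own account)
ssh suphc123@pauli.phc.uni-kiel.de

# Copy an input file from your workstation to a folder in your home directory on pauli
scp input.com suphc123@pauli.phc.uni-kiel.de:~/calc01/

# Copy a result file back to the current directory on your workstation
scp suphc123@pauli.phc.uni-kiel.de:~/calc01/output.log .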

Slide 14: Computation center account (New Hardware)

Account form
- For new accounts, download the computation center account form
- In the field "Institutsrechner" (institute computer), fill in "pauli"
- All existing suphc accounts already have pauli access

Slide 16: PuTTY (New Hardware)

Download the PuTTY client
- Official download address: sgtatham/putty/download.html
- Choose the Windows x86 version (putty.exe)
- Log in to pauli.phc.uni-kiel.de (from the PHC/PCTC net)

Slide 18: WinSCP (New Hardware)

Download the WinSCP client
- Official download address:
- Don't let the advertising distract you (look for [Download WinSCP], choose "Installation package")

Slide 20: Outline (Folder Structure and Software)

1 About this seminar
2 New Hardware
3 Folder Structure and Software
4 Running calculations on pauli

Slide 21: Folder structure on pauli (Folder Structure and Software)

On the frontend
- Home directories: /home/suphcxxx (daily backups)
- Software directories: /home/software or /home/software_hartke (licenses)
- No scratch directory
- PBS-based queuing system: torque; no time limit, no CPU limit, max. 20 jobs per user
- Ganglia monitoring system:

On the computation nodes
- Home and software directories mounted from the frontend
- Local scratch directory: /scratch, /scr1, /scr2 (all the same)
- Mindset: keep computation data locally on the computation node to avoid network traffic (a sketch of this pattern follows below)
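
To make the "local scratch" mindset concrete: a job script typically stages its input to the node-local scratch directory, computes there, and copies the results back at the end. A minimal sketch of this pattern (the folder, file, and program names are placeholders; the prepared PBS scripts on pauli already handle this for you):

#!/bin/bash
# Illustrative staging pattern for node-local scratch (not one of the prepared scripts).
SCRATCHDIR=/scratch/$USER/job_$$          # per-job directory on the node-local disk
mkdir -p "$SCRATCHDIR"

cp ~/calc01/input.dat "$SCRATCHDIR"/      # stage the input from the NFS-mounted home

cd "$SCRATCHDIR"
./my_program input.dat > output.log       # compute on the local disk, not over the network

cp output.log ~/calc01/                   # copy the results back to the home directory
rm -rf "$SCRATCHDIR"                      # clean up the local scratch space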

Slide 23: Mini crash course: Linux (Folder Structure and Software)

Basic Linux commands (a short example session follows below)
- Log in to a machine: ssh <userid>@<hostname>
- List directory contents: ls <dirname> (without <dirname>: current directory)
- Change / create a directory: cd <dirname> / mkdir <dirname>
- Copy a file: cp <oldfile> <newfile> (cp -r for directories)
- Remove a file: rm <file> (rm -r for directories)
- Remote copy: scp <file> <userid>@<hostname>:<path>
- Move a file: mv <file> path/to/<newfile>
- Show file contents: cat <filename> (more <filename> for long files)
- Editors: vim, emacs (look for a separate tutorial)
- Table of processes: top (quit with q)
- List processes: ps (e.g. ps aux | grep <userid> for a user's processes)
- Search in a file: grep "<searchphrase>" <filename> (case sensitive)
- More info on a command: man <command>
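
Putting a few of these commands together, a short session on pauli might look like this (the directory and file names are placeholders):

ssh suphc123@pauli.phc.uni-kiel.de    # log in to the frontend
mkdir water_opt && cd water_opt       # create and enter a calculation directory
cp ~/templates/water.com .            # copy an input file into it
grep "opt" water.com                  # search the file for the keyword "opt"
more water.com                        # page through the file
man qsub                              # read the manual page of a command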

Slide 24: Mini crash course: PBS (Folder Structure and Software)

Basic PBS commands
- Submit a job: qsub <scriptname>
- List jobs: qstat -a (-n also shows which nodes are used)
- Delete a job: qdel <jobid>
- List node information: pbsnodes -a

PBS options (usually set in the script; see the header sketch below)
- Job name: -N <jobname>
- Output / error file: -o <outputfile> / -e <errorfile>
- Mail settings: -m n/a/b/e (n = none, a = abort, b = begin, e = end)
- Mail address: -M <mailaddress>
- Memory: -l mem=<num>gb
- CPUs: -l nodes=<number of nodes>:ppn=<CPUs per node>
- Wall time: -l walltime=hh:mm:ss (= expected job duration)
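
Inside a job script these options appear as #PBS directive lines at the top, which qsub reads when the script is submitted. A minimal sketch of such a header (job name, mail address, and resource numbers are placeholder values, not those of the prepared scripts):

#!/bin/bash
#PBS -N water_opt                  # job name
#PBS -o water_opt.out              # standard output file
#PBS -e water_opt.err              # standard error file
#PBS -m ae                         # send mail on abort and end
#PBS -M someone@phc.uni-kiel.de    # mail address (placeholder)
#PBS -l nodes=1:ppn=8              # 1 node, 8 cores on that node
#PBS -l mem=16gb                   # total memory request
#PBS -l walltime=24:00:00          # expected job duration

cd "$PBS_O_WORKDIR"                # change to the directory the job was submitted from
echo "Job started on $(hostname)"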

Slide 27: General preparation (Folder Structure and Software)

Prepare the system by hand (a combined sketch follows after this list)

Create a passwordless login to the compute nodes
- Log in to pauli: ssh <userid>@pauli.phc.uni-kiel.de
- Create a DSA key pair: ssh-keygen -t dsa -f ~/.ssh/id_dsa -N ""
- Allow this key: cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Add the nodes to the known_hosts file
- Append the prepared file to the user's known_hosts: cat /home/software/known_hosts >> ~/.ssh/known_hosts

Create the user's scratch directory
- Needs to be executed for every compute node (pauli01-pauli06)
- Execute: ssh pauli0x "mkdir -p /scratch/<userid>"

Or use the prepared script
- Execute: /home/software/prepare_for_pauli.sh
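
Run as one block, the manual preparation amounts to the following sketch; it assumes your login name is available in $USER and that the nodes are reachable as pauli01 to pauli06. The prepared script /home/software/prepare_for_pauli.sh covers the same steps.

# Passwordless login to the compute nodes
ssh-keygen -t dsa -f ~/.ssh/id_dsa -N ""
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

# Add the compute nodes to the user's known_hosts
cat /home/software/known_hosts >> ~/.ssh/known_hosts

# Create a scratch directory on every compute node
for node in pauli01 pauli02 pauli03 pauli04 pauli05 pauli06; do
    ssh "$node" "mkdir -p /scratch/$USER"
done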

Slide 31: Basic setup (Folder Structure and Software) - DEMO

Slide 32: Outline (Running calculations on pauli)

1 About this seminar
2 New Hardware
3 Folder Structure and Software
4 Running calculations on pauli

Slide 33: Running a Gaussian job (Running calculations on pauli)

Installed versions
- Gaussian03 and Gaussian09 are installed on pauli; they run in parallel on one node
- Generally not useful to use more than 8-16 cores without testing
- Install directory: /home/software/g0x/g0x/ (x = 3, 9)
- Example PBS scripts: /home/software/g0x/pbs_script

Job execution (see the sketch below)
- Create a folder for each calculation in your home directory
- Create a .com or .gjf input file, or copy it over via SCP/WinSCP
- Make a local copy of the prepared Gaussian PBS script: cp /home/software/g0x/pbs_script .
- Edit the PBS script according to the comments in the script
- Start the job via qsub pbs_script and check on it via qstat
- All output is kept locally on the compute node; file transfer to and from the compute node is done automatically by the script
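
For orientation, the hand-typed part of a Gaussian run usually boils down to the commands below. This is a sketch, not the prepared script itself; the folder name, the input file water.com, and the choice of Gaussian09 (g09) are placeholder assumptions.

mkdir ~/water_opt && cd ~/water_opt    # one folder per calculation
# create water.com here, or copy it over via SCP/WinSCP

cp /home/software/g09/pbs_script .     # local copy of the prepared Gaussian PBS script
vim pbs_script                         # adjust job name, input file, cores, memory, ...

qsub pbs_script                        # submit the job
qstat -a                               # check its state (Q = queued, R = running)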

Slide 35: Running a Gaussian job (Running calculations on pauli) - DEMO

Slide 36: Running a Turbomole job (Running calculations on pauli)

Installed version
- An MPI-parallel Turbomole 6.4 is installed on pauli
- Generally not useful to use more than 8-16 cores without testing
- Install directory: /home/software/turbomole6.4/turbomole/
- Example PBS script: /home/software/turbomole6.4/pbs_script

Job execution (see the sketch below)
- Create a folder for each calculation in your home directory
- Run define, or copy the input files over via SCP/WinSCP
- Make a local copy of the prepared Turbomole PBS script: cp /home/software/turbomole6.4/pbs_script .
- Edit the PBS script according to the comments in the script
- Start the job via qsub pbs_script and check on it via qstat
- All output stays in the submit directory to ensure MPI process communication
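
The Turbomole workflow mirrors the Gaussian one; only the script location and the input preparation differ. A sketch (the folder name and the interactive define step are assumptions):

mkdir ~/benzene_dft && cd ~/benzene_dft        # one folder per calculation
define                                         # interactive Turbomole input setup,
                                               # or copy control/coord/basis files via WinSCP

cp /home/software/turbomole6.4/pbs_script .    # local copy of the prepared Turbomole PBS script
vim pbs_script                                 # adjust resources according to the comments

qsub pbs_script                                # submit; all output stays in this directory
qstat -a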

Slide 38: Running a Turbomole job (Running calculations on pauli) - DEMO

Slide 39: Slides download (Running calculations on pauli)

- The slides are available for download under the section TEACHING (at the bottom of the page)

Slide 40: The end (Running calculations on pauli)

Thank you for your attention!
Don't forget to pick up your AK account passwords!

Slide 41: Happy computing (Running calculations on pauli)
