New High-Performance Computing Cluster: PAULI. Sascha Frick, Institute for Physical Chemistry



New High-Performance Computing Cluster: PAULI
Sascha Frick, Institute for Physical Chemistry
02/05/2012

Outline
1 About this seminar
2 New Hardware
3 Folder Structure and Software
4 Running calculations on pauli

About this seminar

What is this talk about?
- New computation cluster: hardware specifications
- Access to the cluster
- Folder structure: where is what, whom you can ask
- How to start a job in Gaussian and Turbomole

What is this talk not about?
- Any kind of theory behind the computation methods
- Which method to use for which job
- How to set up job-specific input files (there is a module for that: chem0503)

Section 2: New Hardware

New Hardware: General information

Some general facts
- Named after Wolfgang Pauli (Pauli principle)
- Located in the computation center (RZ)
- The cluster belongs to Physical Chemistry
- Hardware and operating system administration by the computation center
- Computation software administration by Physical Chemistry

People to ask
- Operating system related: Dr. Klaus Nielsen (nielsen@rz.uni.kiel.de)
- Software related: Sascha Frick (frick@pctc.uni-kiel.de), Prof. Dr. Bernd Hartke (hartke@pctc.uni-kiel.de)

New Hardware: Technical specifications

Available machines
- 1 frontend system pauli (login server, file server)
- 6 computation nodes pauli01-pauli06
- Connected via Gigabit Ethernet

Frontend
- CPU: 2x AMD Opteron 6212, 8 cores, 2.4 GHz
- RAM: 8x 4 GB DDR3 PC1333 Reg. ECC (2 GB/core)
- Storage: 8x 3 TB Hitachi, RAID6 + spare (net 15 TB /home, backed up)

Computation nodes
- CPU: 4x AMD Opteron 6274, 16 cores, 2.2 GHz
- RAM: 32x 16 GB DDR3 PC1333 Reg. ECC (8 GB/core)
- Storage: 6x 2 TB Western Digital, RAID5 + spare (net 8 TB /scratch)

New Hardware: Access to pauli

Required account
- An account of the computation center (suphcxxx)
- Old named accounts from dirac
- One account per workgroup for trainees and bachelor students

Access methods
- Only the frontend is directly accessible (pauli.phc.uni-kiel.de)
- Computation nodes are accessible from the frontend (only for debugging)
- Access via SSH/SCP (on Windows via PuTTY, WinSCP); a command-line example follows below
- Right now home directories are not accessible via a Samba share (needed?)

Computation remarks
- Never calculate on the frontend node; mini tests are OK
- Never start jobs on the compute nodes by hand, always go through the queuing system
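
For users coming from a Linux or Mac command line, login and file transfer to pauli look like the following sketch; the account name suphc001, the folder water_job, and the file names are placeholders, not paths that exist on the cluster. Windows users use PuTTY and WinSCP instead (see the next slides).

    # log in to the frontend
    ssh suphc001@pauli.phc.uni-kiel.de

    # copy a local input file into a job folder in the home directory on pauli
    scp water.com suphc001@pauli.phc.uni-kiel.de:~/water_job/

    # copy a finished output file back to the local machine
    scp suphc001@pauli.phc.uni-kiel.de:~/water_job/water.log .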

New Hardware: Computation center account

Form
- For new accounts, download form 1 of the computation center: http://www.rz.uni-kiel.de/anmeldung/formulare/form1.pdf
- Under "Institutsrechner" fill in "pauli"
- All existing suphc accounts already have pauli access

New Hardware: PuTTY

Download the PuTTY client
- Official download address: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
- Choose the Windows x86 version (putty.exe)
- Log in to pauli.phc.uni-kiel.de (from the PHC/PCTC net)

New Hardware: WinSCP

Download the WinSCP client
- Official download address: http://winscp.net/eng/download.php
- Don't let the advertisements distract you (look for [Download WinSCP] and choose the installation package)

Section 3: Folder Structure and Software

Folder Structure and Software: Folder structure on pauli

On the frontend
- Home directories /home/suphcxxx (daily backup)
- Software directories /home/software or /home/software_hartke (licenses)
- No scratch directory
- PBS-based queuing system: Torque; no time limit, no CPU limit, at most 20 jobs per user
- Ganglia monitoring system: http://pauli.phc.uni-kiel.de/ganglia/

On the computation nodes
- Home and software directories mounted from the frontend
- Local scratch directory /scratch, /scr1, /scr2 (all the same)
- Mindset: save computation data locally on the computation node to prevent network traffic

Folder Structure and Software: Mini crash course Linux

Basic Linux commands (a short worked session follows below)
- Log in to a machine: ssh <userid>@pauli.phc.uni-kiel.de
- List directory contents: ls <dirname> (without <dirname>: the current directory)
- Change / create a directory: cd <dirname> / mkdir <dirname>
- Copy a file: cp <oldfile> <newfile> (cp -r for directories)
- Remove a file: rm <file> (rm -r for directories)
- Remote copy: scp <userid>@host:/path/to/<oldfile> <newfile>
- Move a file: mv <file> path/to/<newfile>
- Show file content: cat <filename> (more <filename> for long files)
- Editors: vim, emacs (look for a separate tutorial)
- Table of processes: top (quit with q)
- List processes: ps (e.g. ps aux | grep <userid> for the user's own processes)
- Search in a file: grep "<searchphrase>" <filename> (case sensitive)
- More information on a command: man <command>
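
A short worked session tying a few of these commands together; the account name suphc001 and all directory and file names are placeholders:

    ssh suphc001@pauli.phc.uni-kiel.de   # log in to the frontend
    mkdir testjob && cd testjob          # create and enter a working directory
    cp ~/template.com input.com          # copy a file under a new name
    cat input.com                        # show its content
    grep "opt" input.com                 # search inside it (case sensitive)
    top                                  # watch running processes, quit with q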

Folder Structure and Software: Mini crash course PBS

Basic PBS commands
- Submit a job: qsub <scriptname>
- List jobs: qstat -a (add -n to show which nodes are used)
- Delete a job: qdel <jobid>
- List node information: pbsnodes -a

PBS options (usually set in the script; a minimal example script follows below)
- Job name: -N <jobname>
- Output / error file: -o <outputfile> / -e <errorfile>
- Mail settings: -m n/a/b/e (n=none, a=abort, b=begin, e=end)
- Mail address: -M <mailaddress>
- Memory: -l mem=<num>gb
- CPUs: -l nodes=<number of nodes>:ppn=<cpus per node>
- Wall time: -l walltime=hh:mm:ss (= <job duration>)
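
As an illustration only, not one of the prepared scripts on pauli, a minimal Torque/PBS script combining these options could look like this; the job name, mail address, and resource numbers are placeholders to be adapted:

    #!/bin/bash
    #PBS -N testjob                  # job name
    #PBS -o testjob.out              # standard output file
    #PBS -e testjob.err              # standard error file
    #PBS -m ae                       # send mail on abort and end
    #PBS -M someone@pctc.uni-kiel.de # mail address (placeholder)
    #PBS -l nodes=1:ppn=8            # 1 node, 8 cores
    #PBS -l mem=16gb                 # 16 GB of memory
    #PBS -l walltime=24:00:00        # expected job duration

    cd $PBS_O_WORKDIR                # change to the directory the job was submitted from
    echo "Running on $(hostname)"    # trivial payload; the real program call goes here

Submit it with qsub <scriptname> and watch it with qstat -a, as listed above.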

Folder Structure and Software: General preparation

Prepare the system by hand (a combined sketch follows below)
- Create a passwordless login to the compute nodes:
  - Log in to pauli: ssh <userid>@pauli.phc.uni-kiel.de
  - Create a DSA key pair: ssh-keygen -t dsa -f ~/.ssh/id_dsa -N ""
  - Allow this key: cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
- Add the nodes to the known_hosts file:
  - Append the prepared file to the user's known_hosts: cat /home/software/known_hosts >> ~/.ssh/known_hosts
- Create the user's scratch directory:
  - Needs to be done for every compute node (pauli01-pauli06)
  - Execute: ssh pauli0x "mkdir -p /scratch/<userid>"

Or use the prepared script
- Execute: /home/software/prepare_for_pauli.sh
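
Done by hand, the whole preparation boils down to a few shell commands. A sketch of the manual route, run once on the frontend; the loop is simply one way to avoid typing the mkdir line six times:

    #!/bin/bash
    # 1. Create a DSA key pair without a passphrase and authorize it for your own account
    ssh-keygen -t dsa -f ~/.ssh/id_dsa -N ""
    cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

    # 2. Add the compute nodes to known_hosts using the prepared file
    cat /home/software/known_hosts >> ~/.ssh/known_hosts

    # 3. Create a personal scratch directory on every compute node
    for node in pauli01 pauli02 pauli03 pauli04 pauli05 pauli06; do
        ssh "$node" "mkdir -p /scratch/$USER"
    done

Alternatively, just run the prepared script /home/software/prepare_for_pauli.sh as the slide suggests.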

Folder Structure and Software: Basic setup. DEMO

Section 4: Running calculations on pauli

Running calculations on pauli: Running a Gaussian job

Installed versions
- Gaussian03 and Gaussian09 are installed on pauli; they run in parallel on one node
- Generally not useful to use more than 8-16 cores without testing
- Install directory: /home/software/g0x/g0x/ (x = 3, 9)
- Example PBS scripts: /home/software/g0x/pbs_script

Job execution (a shell-session sketch follows below)
- Create a folder for each calculation in your home directory
- Create a .com or .gjf file there, or copy it over via SCP/WinSCP
- Make a local copy of the prepared Gaussian PBS script: cp /home/software/g0x/pbs_script .
- Edit the PBS script according to the comments in the script
- Start the job via qsub pbs_script and check on your job via qstat
- All output is kept locally on the compute node; file transfer to and from the compute node is done automatically by the script
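
The whole workflow from the frontend as a shell-session sketch; the folder and input file names are placeholders, and the g09 path assumes the x = 9 case of the install path above. The prepared pbs_script itself is not reproduced here:

    mkdir ~/h2o_opt && cd ~/h2o_opt      # one folder per calculation
    # create h2o.com here, or copy it in via SCP/WinSCP beforehand
    cp /home/software/g09/pbs_script .   # local copy of the prepared Gaussian script
    # edit pbs_script: input file name, cores, memory (see the comments in the script)
    qsub pbs_script                      # submit the job
    qstat -a                             # check its status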

Running calculations on pauli: Running a Gaussian job. DEMO

Running calculations on pauli: Running a Turbomole job

Installed version
- An MPI-parallel Turbomole 6.4 is installed on pauli
- Generally not useful to use more than 8-16 cores without testing
- Install directory: /home/software/turbomole6.4/turbomole/
- Example PBS script: /home/software/turbomole6.4/pbs_script

Job execution (a shell-session sketch follows below)
- Create a folder for each calculation in your home directory
- Run define there, or copy the input files over via SCP/WinSCP
- Make a local copy of the prepared Turbomole PBS script: cp /home/software/turbomole6.4/pbs_script .
- Edit the PBS script according to the comments in the script
- Start the job via qsub pbs_script and check on your job via qstat
- All output stays in the submit directory to ensure MPI process communication
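
The same pattern as for Gaussian, again only as a sketch with placeholder folder and file names; note that here the output stays in the submit directory:

    mkdir ~/benzene_scf && cd ~/benzene_scf       # one folder per calculation
    # run define here, or copy existing control/coord/basis files via SCP/WinSCP
    cp /home/software/turbomole6.4/pbs_script .   # local copy of the prepared Turbomole script
    # edit pbs_script according to its comments, then:
    qsub pbs_script                               # submit the job
    qstat -a                                      # check its status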

Running calculations on pauli: Running a Turbomole job. DEMO

Slides download
- http://ravel.pctc.uni-kiel.de/, under the section TEACHING at the bottom

The End

Thank you for your attention! Don't forget to pick up your AK account passwords!

Happy computing!