New High-Performance Computing Cluster: PAULI
Sascha Frick
Institute for Physical Chemistry
02/05/2012
Outline
1 About this seminar
2 New Hardware
3 Folder Structure and Software
4 Running calculations on pauli
About this seminar

What is this talk about?
- New computation cluster: hardware specifications
- Access to the cluster
- Folder structure: where is what, whom you can ask
- How to start a job in Gaussian and Turbomole

What is this talk not about?
- Any kind of theory behind the computation methods
- Which method to use for which job
- How to set up job-specific input files (there is a module for that: chem0503)
Section 2: New Hardware
New Hardware — General information

Some general facts
- Named after Wolfgang Pauli (Pauli principle)
- Located in the computation center (RZ)
- The cluster belongs to Physical Chemistry
- Hardware and operating system administration by the computation center
- Computation software administration by Physical Chemistry

People to ask
- Operating system related: Dr. Klaus Nielsen (nielsen@rz.uni.kiel.de)
- Software related: Sascha Frick (frick@pctc.uni-kiel.de), Prof. Dr. Bernd Hartke (hartke@pctc.uni-kiel.de)
New Hardware — Technical specifications

Available machines
- 1 frontend system: pauli (login server, file server)
- 6 computation nodes: pauli01-pauli06
- Connected via Gigabit Ethernet

Frontend
- CPU: 2x AMD Opteron 6212, 8 cores, 2.4 GHz
- RAM: 8x 4 GB DDR3 PC1333 Reg. ECC (2 GB/core)
- Storage: 8x 3 TB Hitachi, RAID6 + spare (15 TB net, /home, backed up)

Computation nodes
- CPU: 4x AMD Opteron 6274, 16 cores, 2.2 GHz
- RAM: 32x 16 GB DDR3 PC1333 Reg. ECC (8 GB/core)
- Storage: 6x 2 TB Western Digital, RAID5 + spare (8 TB net, /scratch)
New Hardware — Access to pauli

Needed account
- Account of the computation center (suphcxxx)
- Old named accounts of dirac
- One account per workgroup for trainees and bachelor students

Access methods
- Only the frontend is directly accessible (pauli.phc.uni-kiel.de)
- Computation nodes are accessible from the frontend (only for debugging)
- Access via SSH/SCP (on Windows via PuTTY, WinSCP)
- Right now home directories are not accessible via Samba share (needed?)
- A minimal login and file-transfer example follows below

Computation remarks
- Never calculate on the frontend node; mini tests are OK
- Never start jobs on the compute nodes by hand, always via the queuing system
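For reference, logging in and copying files from a Linux or macOS machine looks like the following minimal sketch; the account name suphc123 and the file names are placeholders, not real examples from the cluster.

    # log in to the frontend (replace suphc123 with your own account)
    ssh suphc123@pauli.phc.uni-kiel.de

    # copy a local input file into a job folder in your home directory on pauli
    scp water_opt.com suphc123@pauli.phc.uni-kiel.de:~/water_opt/

    # copy a finished output file back to the local machine
    scp suphc123@pauli.phc.uni-kiel.de:~/water_opt/water_opt.log .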
New Hardware — Computation center account

Form
- For new accounts, download computation center form 1: http://www.rz.uni-kiel.de/anmeldung/formulare/form1.pdf
- Under "Institutsrechner" (institute computer) fill in pauli
- All existing suphc accounts already have pauli access
New Hardware — PuTTY

Download PuTTY client
- Official download address: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
- Choose the Windows x86 version (putty.exe)
- Log in to pauli.phc.uni-kiel.de (from the PHC/PCTC net)
New Hardware — WinSCP

Download WinSCP client
- Official download address: http://winscp.net/eng/download.php
- Don't let the advertisements distract you (look for [Download WinSCP], choose "Installation package")
Section 3: Folder Structure and Software
Folder Structure and Software — Folder structure on pauli

On the frontend
- Home directories /home/suphcxxx (backed up daily)
- Software directories /home/software or /home/software_hartke (licenses)
- No scratch directory
- PBS-based queuing system: Torque; no time limit, no CPU limit, max. 20 jobs per user
- Ganglia monitoring system: http://pauli.phc.uni-kiel.de/ganglia/

On the computation nodes
- Home and software directories mounted from the frontend
- Local scratch directory /scratch, /scr1, /scr2 (all the same)
- Mindset: save computation data locally on the computation node to avoid network traffic
- A quick check of queue load and scratch space is sketched below
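Before submitting large jobs it can help to look at the queue and at the local scratch space; a small sketch, assuming pauli03 as an example node:

    # show the current queue and which nodes are in use
    qstat -a -n

    # check free space in the local scratch directory of one compute node
    ssh pauli03 "df -h /scratch"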
Folder Structure and Software — Mini crash course: Linux

Basic Linux commands (a short example session follows below)
- Log in to a machine: ssh <userid>@pauli.phc.uni-kiel.de
- List directory content: ls <dirname> (without <dirname>: current directory)
- Change / create directory: cd <dirname> / mkdir <dirname>
- Copy file: cp <oldfile> <newfile> (cp -r for directories)
- Remove file: rm <file> (rm -r for directories)
- Remote copy: scp <userid>@host:/path/to/<oldfile> <newfile>
- Move file: mv <file> path/to/<newfile>
- Show file content: cat <filename> (more <filename> for long files)
- Editors: vim, emacs (look for a separate tutorial)
- Table of processes: top (quit with q)
- List processes: ps (e.g. ps aux | grep <userid> for a user's processes)
- Search in a file: grep "<searchphrase>" <filename> (case sensitive)
- More info on a command: man <command>
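To tie these commands together, here is a short example session; the account name, directory, file name, and the grep pattern (which assumes a Gaussian log file) are placeholders.

    # log in and create a working directory for a new calculation
    ssh suphc123@pauli.phc.uni-kiel.de
    mkdir water_opt
    cd water_opt

    # after the job has finished: list the files and search the log for the SCF energy
    ls
    more water_opt.log
    grep "SCF Done" water_opt.log

    # keep an eye on your own processes
    ps aux | grep suphc123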
Folder Structure and Software — Mini crash course: PBS

Basic PBS commands
- Submit job: qsub <scriptname>
- List jobs: qstat -a (-n to show which nodes are used)
- Delete job: qdel <jobid>
- List node info: pbsnodes -a

PBS options (usually used in the script; a minimal skeleton is sketched after this list)
- Job name: -N <jobname>
- Output / error file: -o <outputfile> / -e <errorfile>
- Mail settings: -m n/a/b/e (n = none, a = abort, b = begin, e = end)
- Mail address: -M <mailaddress>
- Memory: -l mem=<num>gb
- CPUs: -l nodes=<number of nodes>:ppn=<cpus per node>
- Wall time: -l walltime=hh:mm:ss (= <job duration>)
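As a reference, here is a minimal PBS script skeleton built only from the options listed above; the job name, resources, mail address, and program call are placeholders. For Gaussian and Turbomole jobs, use the prepared scripts under /home/software instead.

    #!/bin/bash
    #PBS -N testjob                      # job name
    #PBS -o testjob.out                  # standard output file
    #PBS -e testjob.err                  # standard error file
    #PBS -m ae                           # mail on abort and end
    #PBS -M your.name@pctc.uni-kiel.de   # mail address (placeholder)
    #PBS -l nodes=1:ppn=8                # 1 node, 8 cores
    #PBS -l mem=16gb                     # 16 GB of memory
    #PBS -l walltime=24:00:00            # expected job duration

    cd $PBS_O_WORKDIR                    # directory the job was submitted from
    ./my_program > my_program.out        # placeholder for the actual program call

Submit it with qsub <scriptname> and watch it with qstat -a.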
Folder Structure and Software — General preparation

Prepare the system by hand (the steps are collected in a small sketch after this list)
- Create passwordless login to the compute nodes:
  - Log in to pauli: ssh <userid>@pauli.phc.uni-kiel.de
  - Create a DSA key pair: ssh-keygen -t dsa -f ~/.ssh/id_dsa -N ""
  - Allow this key: cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
- Add the nodes to the known_hosts file:
  - Add to the user's known_hosts (using the prepared file): cat /home/software/known_hosts >> ~/.ssh/known_hosts
- Create the user's scratch directory:
  - Needs to be executed for every compute node (pauli01-pauli06)
  - Execute: ssh pauli0x "mkdir -p /scratch/<userid>"

Or use the prepared script
- Execute: /home/software/prepare_for_pauli.sh
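For orientation, the manual steps above collected into one small shell sketch; this is not the contents of /home/software/prepare_for_pauli.sh, only an illustration of what it has to do.

    #!/bin/bash
    # create a passwordless DSA key pair if none exists yet
    [ -f ~/.ssh/id_dsa ] || ssh-keygen -t dsa -f ~/.ssh/id_dsa -N ""

    # authorize the key for logins to the compute nodes
    cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

    # add the host keys of the compute nodes to known_hosts (prepared file)
    cat /home/software/known_hosts >> ~/.ssh/known_hosts

    # create a personal scratch directory on every compute node
    for node in pauli01 pauli02 pauli03 pauli04 pauli05 pauli06; do
        ssh "$node" "mkdir -p /scratch/$USER"
    done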
Folder Structure and Software — Basic setup: DEMO
Section 4: Running calculations on pauli
Running calculations on pauli — Running a Gaussian job

Installed versions
- Gaussian03 and Gaussian09 are installed on pauli; they run in parallel on one node
- Generally not useful to use more than 8-16 cores without testing
- Install directory: /home/software/g0x/g0x/ (x = 3, 9)
- Example PBS scripts: /home/software/g0x/pbs_script

Job execution
- Create a folder for each calculation in your home directory
- Create a .com or .gjf file or copy it via SCP/WinSCP
- Make a local copy of the prepared Gaussian PBS script: cp /home/software/g0x/pbs_script .
- Edit the PBS script according to the comments in the script
- Start the job via qsub pbs_script and check on it via qstat
- All output stays local on the compute node; file transfer to and from the compute node is done automatically by the script (see the sketch after this list)
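To illustrate what the prepared script does, here is a rough sketch of the copy-to-scratch pattern for a Gaussian 09 job. It is not the prepared pbs_script itself; the g09root path and the g09.profile location are assumptions based on the install directory above, and the file names are placeholders.

    #!/bin/bash
    #PBS -N water_opt
    #PBS -l nodes=1:ppn=8
    #PBS -l walltime=24:00:00

    # set up the Gaussian 09 environment (path assumed from the install directory)
    export g09root=/home/software/g09
    source $g09root/g09/bsd/g09.profile

    # run in the local scratch directory of the compute node
    SCRDIR=/scratch/$USER/$PBS_JOBID
    mkdir -p $SCRDIR
    export GAUSS_SCRDIR=$SCRDIR
    cp $PBS_O_WORKDIR/water_opt.com $SCRDIR/
    cd $SCRDIR
    g09 < water_opt.com > water_opt.log

    # copy the results back to the submit directory and clean up
    cp water_opt.log $PBS_O_WORKDIR/
    rm -rf $SCRDIR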
Running a Gaussian job: DEMO
Running calculations on pauli — Running a Turbomole job

Installed version
- An MPI-parallel Turbomole 6.4 is installed on pauli
- Generally not useful to use more than 8-16 cores without testing
- Install directory: /home/software/turbomole6.4/turbomole/
- Example PBS script: /home/software/turbomole6.4/pbs_script

Job execution
- Create a folder for each calculation in your home directory
- Run define or copy the input files via SCP/WinSCP
- Make a local copy of the prepared Turbomole PBS script: cp /home/software/turbomole6.4/pbs_script .
- Edit the PBS script according to the comments in the script
- Start the job via qsub pbs_script and check on it via qstat
- All output stays in the submit directory to ensure MPI process communication (see the sketch after this list)
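Again only as an illustration, a sketch of an MPI-parallel Turbomole run. It is not the prepared pbs_script; the environment variables (PARA_ARCH, PARNODES) and the use of the sysname script are standard Turbomole conventions assumed here, and the job name is a placeholder.

    #!/bin/bash
    #PBS -N tm_job
    #PBS -l nodes=1:ppn=8
    #PBS -l walltime=24:00:00

    # Turbomole environment (install path from the slide; variables are Turbomole conventions)
    export TURBODIR=/home/software/turbomole6.4/turbomole
    export PARA_ARCH=MPI
    export PARNODES=8
    export PATH=$TURBODIR/scripts:$TURBODIR/bin/`sysname`:$PATH

    # Turbomole jobs stay in the submit directory so the MPI processes can communicate
    cd $PBS_O_WORKDIR
    jobex -ri -c 200 > jobex.out 2>&1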
Running a Turbomole job: DEMO
Slides download
- http://ravel.pctc.uni-kiel.de/ under the section TEACHING at the bottom
The End

Thank you for your attention!
Don't forget to pick up your AK account passwords!
Happy computing!