

EUROPLEXUS MPI Version at the JRC

Martin Larcher

PUBSY JRC56677

The mission of the IPSC is to provide research results and to support EU policy-makers in their effort towards global security and towards protection of European citizens from accidents, deliberate attacks, fraud and illegal actions against EU policies.

European Commission
Joint Research Centre
Institute for the Protection and Security of the Citizen

Contact information
Address: Martin Larcher, T.P. 480, Joint Research Centre, I Ispra, ITALY

Legal Notice
Neither the European Commission nor any person acting on behalf of the Commission is responsible for the use which might be made of this publication. A great deal of additional information on the European Union is available on the Internet. It can be accessed through the Europa server.

JRC56677

European Union, 2010
Reproduction is authorised provided the source is acknowledged
Printed in Italy

CONTENTS

1 Introduction
2 MPI-Version of EUROPLEXUS under Windows
  2.1 Installation
  2.2 Compilation
  2.3 Debugging
  2.4 Scripts
  2.5 Running the MPI version
3 MPI on the Linux Cluster
  3.1 New JRC cluster
    3.1.1 Log in
    3.1.2 SGE queuing system
  3.2 Environment used
  3.3 Updating the sources
  3.4 Compiling
    3.4.1 Standard Compiling
    3.4.2 Compiling without optimization
    3.4.3 Compiling ParMetis
  3.5 Running
  3.6 Modifications to scripts and files from the CEA environment
    3.6.1 Standard version
    3.6.2 MPI-version
4 Benchmarks
5 References
6 Appendix
  6.1 Batch scripts
  6.2 EUROPLEXUS input files

1 Introduction

The explicit finite element code EUROPLEXUS is written for the calculation of fast dynamic fluid-structure interactions. The code has been developed in collaboration between the French Commissariat à l'Énergie Atomique (CEA Saclay) and the Joint Research Centre of the European Union (JRC Ispra).

In recent years the use of multi-processor machines has become more and more popular, especially under the Linux operating system. Under Windows, too, more and more CPU-intensive programs run on more than one thread. While OpenMP (shared memory) is essentially limited to a single machine, MPI (distributed memory) calculations can also be run on much larger cluster systems. The EUROPLEXUS code has been adapted to MPI calculations in recent years at CEA for their Linux cluster system.

This technical note presents the developments for MPI under Windows at the JRC and the adaptation of the compilation and running procedures to the new JRC HPC Linux cluster. Descriptions of OpenMP and MPI can be found in the respective specifications.

2 MPI-Version of EUROPLEXUS under Windows

There are several implementations of the MPI standard available under Windows. The package used here is MPICH2, which is developed by the Argonne National Laboratory in collaboration with several partners. MPICH2 is a high-performance and widely portable implementation of the Message Passing Interface (MPI) standard, which supports different computation platforms. The program is open source and can be downloaded from the MPICH2 website.

2.1 Installation

MPICH2 has to be installed (administrator rights are needed) on each machine where a part of the calculation is to be performed. The Windows version uses the Microsoft Installer and is therefore self-installing. After installation (a password has to be defined to access the machine through the network) a new folder can be found in the Start menu, which contains a small wmpiconfig program for the configuration of the environment. With this tool, for example, the other computers in the network can be added to the environment. More important is the tool wmpiexec, which can be used to start a calculation with the MPI version of the code. This is described later.

2.2 Compilation

The following list describes the manual compilation of EUROPLEXUS for MPI under Windows. Note, however, that an up-to-date MPI version for Windows is automatically compiled each night on the EUROPLEXUS server at the JRC.

1. Take all the source files using the module M_DOMAINE_MPI: epx_grep -g M_DOMAINE_MPI
2. Comment out the call to ParMetis in the source file part_auto.ff. This is needed since the ParMetis library (a library for automatic CPU load balancing) is not available under Windows.
3. Compile all these files using epx_cmp -M.
4. Link the files using epx_lk -M.

5. Start the interactive program wmpiexec.exe, if available. If the interactive version is not available, the console executable mpiexec.exe must be used instead. The options of this executable can be found in the MPICH2 manual. From the Windows Command Prompt the command is, for example:
C:\Program Files\MPICH2\bin\mpiexec.exe -n 2 -noprompt D:\Users\larchma\Projects\epx.exe benchmark.epx
where -n 2 means 2 processes and epx.exe is the name of the MPI executable produced before.
6. In the interactive application wmpiexec, choose epx.exe (in the correct folder) as the application and add the benchmark file on the command line.
7. MPICH2 also allows other hosts in the network to be defined for the calculation. The names of the computers (hosts) can be specified when the checkbox "more options" is activated. The directory structure used must be the same on all participating computers.

The use of the automatically compiled version is described later (see epx_bench in Section 2.4).

2.3 Debugging

Debugging is not possible using Visual Studio; a special extension would be needed. Instead, to localize errors, one possibility is to compile all source files under MPI in order to get information on where the program failure is located.

2.4 Scripts

epx_cmp
The utility epx_cmp can be used to compile a set of files. The utility has been extended so that an MPI version can also be compiled, using the keyword -M:
epx_cmp -o -M loopelm

epx_lk
The utility epx_lk can be used to link a set of object files against the library on the server. The utility has been extended so that an MPI version can also be linked, using the keyword -M:
epx_lk -o -M

epx_bench
The utility epx_bench can be used to start a calculation. The utility has been extended so that an MPI calculation can also be started, using the keyword -M followed by the number of processes to be used:
epx_bench -M 4 -l bm_str_eros

epx_evol and epx_evol_64
The utilities epx_evol and epx_evol_64 are used to evolve the EUROPLEXUS version each night on the 32-bit and 64-bit servers respectively. They update the libraries and the executables and test all the benchmarks. These utilities have been extended so that MPI versions are also created.

2.5 Running the MPI version

The MPI version can be started using the utility epx_bench. The set of finite elements in the numerical model must be divided by hand ("domain decomposition"), since the ParMetis utility is not available for Windows. The element subsets (sub-domains) must be defined using the keyword STRU followed by the number of subsets. The subsets are then defined using the keyword DOMA followed by a /LECT/:

STRU 2
DOMA LECT a_glass_a a_airb_a v1_a TERM
DOMA LECT a_glass_b a_airb_b v1_b TERM

The interaction between the subsets can, for example, be defined using the following input:

INTE LINK NOMU

The listing is printed for each process using the extension .listingNN, where NN is the process number (01, 02, etc.). The listing from the first process (process zero) is written to the file with the standard extension .listing. It is recommended to avoid the output of the log file, since this is very time consuming for the MPI version. Postprocessing is not yet possible with the MPI version, i.e. a calculation stops when it reaches a SUIT command. MPI often gives errors or cannot be used in combination with SPLT NONE.
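Step 7 of Section 2.2 notes that other hosts in the network can take part in the calculation. As a minimal sketch, assuming MPICH2's -hosts option, that the MPICH2 service is running on both machines and that the directory structure is identical on them, a two-host run might look like this (the host names pc01 and pc02 are placeholders):

"C:\Program Files\MPICH2\bin\mpiexec.exe" -hosts 2 pc01 1 pc02 1 -noprompt D:\Users\larchma\Projects\epx.exe benchmark.epx

Here the number after each host name gives the number of processes to be started on that host, so one process runs on each of the two computers.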

3 MPI on the Linux Cluster

3.1 New JRC cluster

In November 2009 the JRC installed a new Linux-based cluster with 256 cores (see [2]). The basic characteristics of the cluster are:

- 32 physical nodes with 2 processors per node = 64 processors
- Intel Nehalem-EP (quad core) processors at a frequency of 2.93 GHz (64 x 4 = 256 cores)
- 24 GB RAM per node
- HPC interconnect: Infiniband QDR (4X)
- Operating system: Linux CentOS
- Sun Grid Engine cluster scheduler and queue management
- Intel Cluster Toolkit, FORTRAN compiler edition for Linux
- GNU compiler suite
- OpenMPI
- GANGLIA, Webmin
- IPtables enabled

The HPC cluster (compute nodes, storage and networking components) is installed in two water-cooled racks (Rittal RimatriX) in the 05L Datacentre. The system is powered by the Datacentre UPS. The cluster is connected to the JRC Ispra network. The expected overall power consumption is 14 kW. The cluster benefits from all services in the data centre in building 5L in Ispra (Gigabit campus network backbone, storage, GEANT and other services and infrastructure).

3.1.1 Log in

The JRC cluster uses activated IP-tables on the master node to grant access to users. Send an e-mail to the HPC team to have the needed IP address activated. A login user account and password can also be requested in this way. The user account used here is europlx. The master node (hpc01p) can be reached over the network using SSH (for example with PuTTY). Access to the file system can be done using SFTP (FTP over SSH).

For Windows users FileZilla (http://filezilla-project.org/) is recommended. The parameters used for the connection are given in Figure 1.

Figure 1: Parameters for FileZilla to reach the JRC HPC cluster

3.1.2 SGE queuing system

An SGE queuing system is installed so that the capacities of the cluster can be distributed among several users. The commands are described in detail in the SGE documentation. A job can be submitted, for example, by the following command:

qsub -w e -pe openmpi $mpicpu -q prod01_q ./mpi_europlx.csh

For EUROPLEXUS jobs a batch script has been written that simplifies the submission of jobs. The status of the jobs can be inspected using the command qstat. Jobs can be stopped using qdel followed by the queue number of the job, which can be taken from the list produced by qstat.

3.2 Environment used

On the new JRC cluster, Linux scripts similar to the ones available at CEA are used. Compilation of the EUROPLEXUS source code should be done under the user europlx. The following folder structure is used in the home directory (/nfs/staging/europlx) of the user europlx:

- Biblio: contains the generated executables and several libraries (ParMetis, etc.). Three subfolders for 32-bit, 64-bit and 64-bit non-optimized (64_o0) are used.
- bin: contains the scripts for compiling, linking etc.
- Epx_Evol: contains files for the evolution procedure, not used so far.
- Miroir: contains the sources (.ff, .inc, bm_, ...).

Some of the CEA scripts are adapted for use at the JRC since some paths are different. This is described in Section 3.6.

3.3 Updating the sources

The sources, includes and benchmarks must be copied into the appropriate sub-folders of the Miroir folder. Source files are always written in lower case; include files are always written in upper case, with the extension .inc. The update can at present only be done by hand.

3.4 Compiling

3.4.1 Standard Compiling

Standard compiling is started by the command

~/bin/compiler_tout -nobench | tee compiler_tout.log

This compiles all the sources without the MPI keyword. An executable (64-bit, non-MPI) is produced: ~/Biblio/64/europlexus_linux. The option -nobench can be used to suppress the test of the benchmarks. After this, the compilation of the MPI part of the code must be started using

~/bin/compiler_tout_mpi -nobench | tee compiler_tout_mpi.log

An executable (64-bit MPI) is produced: ~/Biblio/64/mpi/europlex_mpi.

3.4.2 Compiling without optimization

A non-optimized version of the code can be generated using

~/bin/compiler_tout_o0 -nobench | tee compiler_tout.log

An executable (64-bit, non-MPI) is produced: ~/Biblio/64_o0/europlexus_linux.

~/bin/compiler_tout_mpi_o0 -nobench | tee compiler_tout_mpi.log

An executable (64-bit MPI) is produced: ~/Biblio/64_o0/mpi/europlex_mpi.
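Putting Sections 3.3 and 3.4 together, a minimal sketch of a manual update-and-rebuild cycle for the optimized 64-bit version might look as follows; the file names are illustrative and the commands are assumed to be run as the user europlx:

# copy updated routines and includes into the source mirror (illustrative file names)
cp loopelm.ff ~/Miroir/source/
cp CALCUL.inc ~/Miroir/include/
# rebuild the standard executable and then the MPI executable
~/bin/compiler_tout -nobench | tee compiler_tout.log
~/bin/compiler_tout_mpi -nobench | tee compiler_tout_mpi.log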

If the non-optimized code is to be used, epx_launch_mpi_o0 must be used instead of epx_launch_mpi. To compile the code without optimization, several batch scripts in the folder bin have been copied to an _o0 version. They are presented in the Appendix.

3.4.3 Compiling ParMetis

ParMetis is used to split the geometry of a multi-domain calculation automatically. It must be compiled only once. The ParMetis sources can be downloaded from the ParMetis website. The source archive has to be extracted to a folder. The file Makefile.in may be changed to define a different path for the parallel C compiler mpicc or for the standard libraries. After that, just type make to build the libraries (libmetis.a and libparmetis.a).

ParMetis is called from EUROPLEXUS using the C routine c_fortran_parmetis.c. This routine must be added to the ParMetis library, which is used at the link step to produce an MPI executable of EUROPLEXUS. To compile it, the following procedure is recommended:

1. Compile c_fortran_parmetis.c, using the includes from the installed OpenMPI library:
mpicc -c c_fortran_parmetis.c -I parmetislib -I Programs
This should be done in the root directory of ParMetis.

2. Add the object file to the ParMetis library:
ar ru $PARMETIS_L/libparmetis.a c_fortran_parmetis.o
where $PARMETIS_L is the directory where the ParMetis libraries are placed.

The updated version of the library libparmetis.a must then be copied into the folder ~/Biblio/32 or ~/Biblio/64 respectively.

3.5 Running

The script epx_launch_mpi starts a parallel MPI run (OpenMPI) directly, without putting it into the job queue. This is still allowed at the moment, but in the future possibly only the queued version of the command will be available.

epx_launch_mpi -np 8 -data /nfs/staging/europlx/test/train1.epx

-np introduces the number of processes
-data introduces the EUROPLEXUS input file

The non-optimized version of the code can be started by

epx_launch_mpi_o0 -np 8 -data /nfs/staging/europlx/test/train1.epx

Alternatively, the script epx_queue submits a job to the queue of the system:

epx_queue -d -np 4 -data erp_1020_04.epx

-d starts the calculation on the development queue. Starting a job in the production queue is not yet possible.
-np introduces the number of processes
-data introduces the EUROPLEXUS input file

Both scripts rename several files to their corresponding fort.xx names. For example, the .msh file is renamed to fort.9. In addition, links to these files are written in the /tmp folder of each calculation node. This is done so that the input files are accessible from each node in the same form.

3.6 Modifications to scripts and files from the CEA environment

3.6.1 Standard version

e-plexus.chemin
The link to the FORTRAN compiler is changed (line 16):
. /nfs/compilers/intel/ictce/3.2.1/ia32/compiler/11.1/038/bin/ifortvars.sh ia32

epx_cft
The optimization (line 16) is set to OPTIM="-O1"

epx_link
Line 59: the optimization is set to -O1 and the libraries liblapack and libblas are given here with their complete paths:
ifort -o $EXE -O1 -Vaxlib $OPT $MPI_LINK_FLAGS $OMP_LINK_FLAGS *.o \
LIBS $DLIB/libsplib.a $ParMetis_FLAGS -L/usr/lib /usr/lib/liblapack.so.3 /usr/lib/libblas.so.3

3.6.2 MPI-version

epx_cft
The optimization (line 16) is set to OPTIM="-O1"
The link to the MPI compiler (line 39) is changed to MPI_CFLAGS=$(mpif90 --showme:compile)

epx_link
Line 59: the optimization is set to -O1 and the libraries liblapack and libblas are given here with their complete paths:
ifort -o $EXE -O1 -Vaxlib $OPT $MPI_LINK_FLAGS $OMP_LINK_FLAGS *.o \
LIBS $DLIB/libsplib.a $ParMetis_FLAGS -L/usr/lib /usr/lib/liblapack.so.3 /usr/lib/libblas.so.3
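For reference, mpif90 --showme:compile does not compile anything: it only prints the flags (essentially the include paths) that the OpenMPI compiler wrapper would add to the command line, which is why its output can be stored in MPI_CFLAGS. A hedged example of such output, assuming an OpenMPI installation under /opt/openmpi (the actual path and flags depend on the installation):

mpif90 --showme:compile
-I/opt/openmpi/include -I/opt/openmpi/lib -pthread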

4 Benchmarks

The speed-up of the MPI version is investigated using a benchmark with laminated glass (emi_ls25.epx, see Larcher [3]). The input is shown in the Appendix. EUROPLEXUS version 1800 of 30 November 2009 is used for the calculations. The calculations were also done on an older cluster of the JRC (Linux, 32-bit, 16 processors).

Figure 2: Calculation time [s] versus number of processes for the different architectures (JRC Linux cluster, optimization and no optimization, Intel 11.1; old JRC Linux cluster, no optimization, Intel 11.1; Windows, optimization, sm37, Intel 10; Windows, no optimization, 64-bit, Intel)

Figure 2 shows the influence of the number of processes and of the architecture on the CPU time. It can be seen that the calculation time on the old JRC cluster is smallest when 4 processes are used. For the new JRC cluster the number of processes with the smallest calculation time is 8. Comparing the Windows architecture with the Linux cluster, it may be observed that on 8 nodes the new JRC cluster needs a CPU time of just one tenth of that of the Windows system.

Figure 3: Relative calculation time versus number of processes for the different architectures, each relative to the calculation on the same architecture on one node (JRC Linux cluster, no optimization; Windows, optimization, sm37; old JRC Linux cluster, no optimization, Intel)

In Figure 3 the calculation time is shown relative to the time on the same architecture on one node. The speed-up on the new cluster is better than that on the old cluster and than that on the Windows system.
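As a short note on the quantity plotted in Figure 3: the relative calculation time is t(n)/t(1), where t(n) is the calculation time with n processes on a given architecture and t(1) the time with one process. Ideal (linear) scaling would correspond to a relative time of 1/n, so the closer a curve stays to 1/n, the better the speed-up.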

5 References

[1] EUROPLEXUS, User's Manual, online version.
[2] Puertas Gallardo, A.: JRC Data Centre Services: High Performance Computing Cluster Pilot. Internal Document, Ispra.
[3] Larcher, M.: Simulation of Several Glass Types Loaded by Air Blast Waves. JRC Technical Note, PUBSY JRC48420, Ispra.

6 Appendix

6.1 Batch scripts

The following scripts have mainly been changed for the 64-bit version of EUROPLEXUS and for the compilation with optimization 0. Changes are marked yellow.

compiler_tout

#!/bin/sh
# Ce shell recompile tout les sources fortran de plexus,
# cree les nouveaux fichiers '$LIBL' et '$EXE',
# teste les benchs (si pas d'option nobench)
# puis faire la mise a jour de '$LIBRARY', 'MODULE', *.mod, bm*
if [ $# -gt 1 -o $1. = "-h." ];then
  echo "Usage: $0 [-nobench] "
  exit
fi
if [ $1. = "-nobench." ]; then
  BEN="no"
  echo "Compiler tout Europlexus sans run_bench !!"
  shift
else
  BEN="yes"
  echo "Compiler tout Europlexus avec run_bench !!"
fi
. ~europlx/bin/e-plexus.chemin
TMPDIR=/tmp/Compil_Epx
TMPDIR=$PWD/Compil_Epx.$$
rm -rf $TMPDIR 2>/dev/null
mkdir $TMPDIR
cd $TMPDIR
set -vx
cp -pr $DSOURCE .
# Phase de compilation
cd ./source
epx_make -j 4 compil
if [ $? != 0 ];then
  echo "Erreur dans la compilation des fortran"
  exit 1
fi
# On cree un nouveau library
mkdir ../Biblio
LIBL=./${LIBRARY##*/}
ar vq ../Biblio/$LIBL *.o
if [ $? != 0 ];then
  echo "Erreur dans la creation du bibliotheque "
  exit 2
fi
# On cree un load module sur $TMPDIR/Biblio
cd ../Biblio
cp -p ../source/main.o .
epx_link -local $LIBL
if [ $? != 0 ];then
  echo "Erreur dans la creation du module "
  exit 3
fi
echo "Repertoire : $PWD"
ls -ltr
if [ $BEN = "yes" ]; then
  # On execute les benchs
  mkdir ../plx_test
  cd ../plx_test
  ln -s ../Biblio/$EXE .
  echo "Repertoire : $PWD"
  ls -ltr
  run_bench -essai
  if [ $? != 0 ] ;then
    echo "STOP: Erreur dans les benchs"
    exit 4
  fi
  echo ">>> Fin des benchmarks "
fi
# Mise a jour de $LIBL et du module
cd ../Biblio
chmod 755 $EXE $LIBL
mv $DLIB/europlex_linux.OLD1 $DLIB/europlex_linux.OLD2
mv $DLIB/europlex_linux $DLIB/europlex_linux.OLD1
mv $EXE $MODULE
mv $LIBL $LIBRARY
echo "OK: $LIBRARY et $MODULE sont mis a jour"
ls -l $DLIB
# Copier les fichiers *.mod
cd ../source
mv *.mod $DMOD
echo "OK: les fichiers .mod sont mis a jour "
date
if [ $BEN = "yes" ]; then
  # Mise a jour des benchs
  cd ../plx_test
  rm *.ali *.tps *.k2000 *log *pun 2>/dev/null
  chmod 644 bm_*.listing bm_*.ps Bm_*
  mv bm_*.listing bm_*.ps Bm_*.lst $DBENCH
  echo "OK: les benchs sont mis a jour "
  date
fi
cd ..
echo " epx_include_lst >> Mise a jour de $INC_lst"
echo "Machine `uname -n` numero de la version demdat.ff"
grep "DATVER=" $DSOURCE/demdat.ff
grep "NVERS=" $DSOURCE/demdat.ff
cd
rm -rf $TMPDIR
echo "remove du repertoire $TMPDIR "
exit 0

compiler_tout_mpi

#!/bin/sh
# Ce shell recrée les modules, bibliotheque MPI.
# On suppose que la version STANDARD est a jour avant de passer ce shell.
# Il teste les benchs (si pas d'option nobench).
nproc=2
BEN="yes"
while [ "$1.xy" != ".xy" ] ;do

case $1 in
  -h) echo "Usage: $0 [-nobench] [-nproc nbproc] " ; exit ;;
  -nobench) BEN="no" ;;
  -nproc) shift ; nproc=$1 ;;
  *) echo "option inconnue : $1" ; exit 1 ;;
esac
shift
done
. ~europlx/bin/e-plexus.chemin
if [ ! -r $LIBRARY_MPI ];then
  # cas ou $LIBRARY_MPI n'existe pas : on execute la procedure
  # il faut creer qd meme $LIBRARY_MPI, sinon plantage dans gmake
  touch --date " " $LIBRARY_MPI
fi
# $DSOURCE/demdat.ff est plus recent que $LIBRARY_MPI ?
wwc=`find $DSOURCE -name demdat.ff -newer $LIBRARY_MPI | wc -l`
if [ $wwc -eq 0 ]; then
  echo "'${LIBRARY_MPI##*/}' up to date"
  ls -l $DSOURCE/demdat.ff $LIBRARY_MPI
  exit 0
fi
# on cree le repertoire de travail
TMP_mpi=$PWD/compil_mpi.$$
rm -rf $TMP_mpi 2>/dev/null
mkdir $TMP_mpi $TMP_mpi/Compil
# copier les sources contenant "IF MPI"
cd $DSOURCE
egrep -lw "^CIF +MPI" *ff | xargs -i cp -p {} $TMP_mpi/Compil
# numero de la version $nvers
nvers=`egrep "NVERS= " demdat.ff | awk '{print $2}'`
vnum=`echo $nvers | ~/bin/proc_evol/fnumber1.pl 0`
#hb Creation de $RAPPORT
#hb DRAP_mpi=$DRAP/../Rapports_mpi
#hb RAPPORT=$DRAP_mpi/"mpi_"$vnum.txt.$$
#hb touch $RAPPORT
#hb exec 2>&1 1>> $RAPPORT
#hb echo "=== BEGIN: Evolution MPI version $vnum : `date +"%d %b %Y %Hh:%Mmn"` ==="
cd $TMP_mpi/Compil
echo "Repertoire `pwd` avant epx_depmod "
ls -l
epx_depmod
echo "Repertoire `pwd`"
ls -l
echo "START: ====== Compiler tout MPI: time: `date +"%d %b %Y %Hh:%Mmn:%Ss"` ========"
# Phase de compilation
echo " 1) compilation : time: `date +"%Hh:%Mmn:%Ss"`"
epx_make -mpi -j 2 compil
if [ $? != 0 ];then
  echo "Erreur dans la compilation des sources fortran"
  exit 1
fi
# Phase de LINK
echo " 2) Phase link : time: `date +"%Hh:%Mmn:%Ss"`"
LIBL=./${LIBRARY_MPI##*/}
MODL=./${EXE_MPI##*/}
rm $LIBL 2>/dev/null
ar q $LIBL *.o 2>&1
rm *.o
ar -x $LIBL main.o
sh -vx epx_link -mpi -local_mpi $LIBL
if [ $? != 0 ];then
  echo "Erreur dans la creation du module executable "
  exit 3
fi
echo "Module executable '$MODL' est cree"
if [ $BEN = "yes" ]; then
  # On execute les benchs MPI
  rm -rf ../plx_test 2>/dev/null
  mkdir ../plx_test
  cd ../plx_test
  ln -s ../Compil/$MODL
  echo ">>> Debut des benchmarks: time: `date +"%Hh:%Mmn:%Ss"` "
  echo "Repertoire `pwd`"
  # nproc = nombre de processeurs (par defaut nproc=2)
  run_bench -essai -mpi $nproc
  if [ $? != 0 ] ;then
    echo "Erreur dans les benchs"
    exit 4
  fi
  echo ">>> Fin des benchmarks: time: `date +"%Hh:%Mmn:%Ss"`"
fi
# Mise a jour de library et du module
cd ../Compil
echo "Mise a jour de library et du module"
chmod 755 $LIBL $MODL
mv $LIBL $LIBRARY_MPI
mv $MODL $MODULE_MPI
# Mise a jour des *.mod
rm -rf $DMOD_MPI 2>/dev/null
mkdir $DMOD_MPI
mv *.mod $DMOD_MPI
echo " "
echo "Contenu du repertoire $DLIB_MPI"
ls -ltr $DLIB_MPI
echo "Machine `uname -n` numero de la version demdat.ff"
grep "DATVER=" $DSOURCE/demdat.ff
grep "NVERS=" $DSOURCE/demdat.ff
cd
rm -rf $TMP_mpi
echo "remove du repertoire $TMP_mpi "
echo "=== END: Compiler_tout MPI version $vnum : `date +"%d %b %Y %Hh:%Mmn:%Ss"`"
exit 0

compiler_tout_mpi_o0

#!/bin/sh
# Ce shell recrée les modules, bibliotheque MPI.

# On suppose que la version STANDARD est a jour avant de passer ce shell.
# Il teste les benchs (si pas d'option nobench).
nproc=2
BEN="yes"
while [ "$1.xy" != ".xy" ] ;do
case $1 in
  -h) echo "Usage: $0 [-nobench] [-nproc nbproc] " ; exit ;;
  -nobench) BEN="no" ;;
  -nproc) shift ; nproc=$1 ;;
  *) echo "option inconnue : $1" ; exit 1 ;;
esac
shift
done
. ~europlx/bin/e-plexus.chemin_o0
if [ ! -r $LIBRARY_MPI ];then
  # cas ou $LIBRARY_MPI n'existe pas : on execute la procedure
  # il faut creer qd meme $LIBRARY_MPI, sinon plantage dans gmake
  touch --date " " $LIBRARY_MPI
fi
# $DSOURCE/demdat.ff est plus recent que $LIBRARY_MPI ?
wwc=`find $DSOURCE -name demdat.ff -newer $LIBRARY_MPI | wc -l`
if [ $wwc -eq 0 ]; then
  echo "'${LIBRARY_MPI##*/}' up to date"
  ls -l $DSOURCE/demdat.ff $LIBRARY_MPI
  exit 0
fi
# on cree le repertoire de travail
TMP_mpi=$PWD/compil_mpi.$$
rm -rf $TMP_mpi 2>/dev/null
mkdir $TMP_mpi $TMP_mpi/Compil
# copier les sources contenant "IF MPI"
cd $DSOURCE
egrep -lw "^CIF +MPI" *ff | xargs -i cp -p {} $TMP_mpi/Compil
# numero de la version $nvers
nvers=`egrep "NVERS= " demdat.ff | awk '{print $2}'`
vnum=`echo $nvers | ~/bin/proc_evol/fnumber1.pl 0`
#hb Creation de $RAPPORT
#hb DRAP_mpi=$DRAP/../Rapports_mpi
#hb RAPPORT=$DRAP_mpi/"mpi_"$vnum.txt.$$
#hb touch $RAPPORT
#hb exec 2>&1 1>> $RAPPORT
#hb echo "=== BEGIN: Evolution MPI version $vnum : `date +"%d %b %Y %Hh:%Mmn"` ==="
cd $TMP_mpi/Compil
echo "Repertoire `pwd` avant epx_depmod "
ls -l
epx_depmod
echo "Repertoire `pwd`"
ls -l
echo "START: ====== Compiler tout MPI: time: `date +"%d %b %Y %Hh:%Mmn:%Ss"` ========"
# Phase de compilation
echo " 1) compilation : time: `date +"%Hh:%Mmn:%Ss"`"
epx_make_o0 -mpi -j 2 compil
if [ $? != 0 ];then
  echo "Erreur dans la compilation des sources fortran"
  exit 1
fi
# Phase de LINK
echo " 2) Phase link : time: `date +"%Hh:%Mmn:%Ss"`"
LIBL=./${LIBRARY_MPI##*/}
MODL=./${EXE_MPI##*/}
rm $LIBL 2>/dev/null
ar q $LIBL *.o 2>&1
rm *.o
ar -x $LIBL main.o
sh -vx epx_link_o0 -mpi -local_mpi $LIBL
if [ $? != 0 ];then
  echo "Erreur dans la creation du module executable "
  exit 3
fi
echo "Module executable '$MODL' est cree"
if [ $BEN = "yes" ]; then
  # On execute les benchs MPI
  rm -rf ../plx_test 2>/dev/null
  mkdir ../plx_test
  cd ../plx_test
  ln -s ../Compil/$MODL
  echo ">>> Debut des benchmarks: time: `date +"%Hh:%Mmn:%Ss"` "
  echo "Repertoire `pwd`"
  # nproc = nombre de processeurs (par defaut nproc=2)
  run_bench_o0 -essai -mpi $nproc
  if [ $? != 0 ] ;then
    echo "Erreur dans les benchs"
    exit 4
  fi
  echo ">>> Fin des benchmarks: time: `date +"%Hh:%Mmn:%Ss"`"
fi
# Mise a jour de library et du module
cd ../Compil
echo "Mise a jour de library et du module"
chmod 755 $LIBL $MODL
mv $LIBL $LIBRARY_MPI
mv $MODL $MODULE_MPI_o0
# Mise a jour des *.mod
rm -rf $DMOD_MPI 2>/dev/null
mkdir $DMOD_MPI
mv *.mod $DMOD_MPI
echo " "
echo "Contenu du repertoire $DLIB_MPI"
ls -ltr $DLIB_MPI
echo "Machine `uname -n` numero de la version demdat.ff"
grep "DATVER=" $DSOURCE/demdat.ff
grep "NVERS=" $DSOURCE/demdat.ff
cd
rm -rf $TMP_mpi
echo "remove du repertoire $TMP_mpi "
echo "=== END: Compiler_tout MPI version $vnum : `date +"%d %b %Y %Hh:%Mmn:%Ss"`"
exit 0

compiler_tout_o0

#!/bin/sh
# Ce shell recompile tout les sources fortran de plexus,
# cree les nouveaux fichiers '$LIBL' et '$EXE',
# teste les benchs (si pas d'option nobench)
# puis faire la mise a jour de '$LIBRARY', 'MODULE', *.mod, bm*
if [ $# -gt 1 -o $1. = "-h." ];then
  echo "Usage: $0 [-nobench] "
  exit
fi
if [ $1. = "-nobench." ]; then
  BEN="no"
  echo "Compiler tout Europlexus sans run_bench !!"
  shift
else
  BEN="yes"
  echo "Compiler tout Europlexus avec run_bench !!"
fi
. ~europlx/bin/e-plexus.chemin_o0
TMPDIR=/tmp/Compil_Epx
TMPDIR=$PWD/Compil_Epx.$$
rm -rf $TMPDIR 2>/dev/null
mkdir $TMPDIR
cd $TMPDIR
set -vx
cp -pr $DSOURCE .
# Phase de compilation
cd ./source
epx_make_o0 -j 4 compil
if [ $? != 0 ];then
  echo "Erreur dans la compilation des fortran"
  exit 1
fi
# On cree un nouveau library
mkdir ../Biblio
LIBL=./${LIBRARY##*/}
ar vq ../Biblio/$LIBL *.o
if [ $? != 0 ];then
  echo "Erreur dans la creation du bibliotheque "
  exit 2
fi
# On cree un load module sur $TMPDIR/Biblio
cd ../Biblio
cp -p ../source/main.o .
epx_link_o0 -local $LIBL
if [ $? != 0 ];then
  echo "Erreur dans la creation du module "
  exit 3
fi
echo "Repertoire : $PWD"
ls -ltr
if [ $BEN = "yes" ]; then
  # On execute les benchs
  mkdir ../plx_test
  cd ../plx_test
  ln -s ../Biblio/$EXE .
  echo "Repertoire : $PWD"
  ls -ltr
  run_bench_o0 -essai
  if [ $? != 0 ] ;then
    echo "STOP: Erreur dans les benchs"
    exit 4
  fi
  echo ">>> Fin des benchmarks "
fi
# Mise a jour de $LIBL et du module
cd ../Biblio
chmod 755 $EXE $LIBL
mv $DLIB/europlex_linux.OLD1 $DLIB/europlex_linux.OLD2
mv $DLIB/europlex_linux $DLIB/europlex_linux.OLD1
mv $EXE $MODULE_o0
mv $LIBL $LIBRARY
echo "OK: $LIBRARY et $MODULE sont mis a jour"
ls -l $DLIB
# Copier les fichiers *.mod
cd ../source
mv *.mod $DMOD
echo "OK: les fichiers .mod sont mis a jour "
date
if [ $BEN = "yes" ]; then
  # Mise a jour des benchs
  cd ../plx_test
  rm *.ali *.tps *.k2000 *log *pun 2>/dev/null
  chmod 644 bm_*.listing bm_*.ps Bm_*
  mv bm_*.listing bm_*.ps Bm_*.lst $DBENCH
  echo "OK: les benchs sont mis a jour "
  date
fi
cd ..
echo " >> Mise a jour de $INC_lst"
epx_include_lst
echo "Machine `uname -n` numero de la version demdat.ff"
grep "DATVER=" $DSOURCE/demdat.ff
grep "NVERS=" $DSOURCE/demdat.ff
cd
rm -rf $TMPDIR
echo "remove du repertoire $TMPDIR "
exit 0

e-plexus.chemin

EUROPLX=~europlx
EPATH="$EUROPLX/bin"
if [ -z $PATH ]; then
  export PATH=$EPATH
else
  export PATH=$EPATH:$PATH
fi
# Pb avec l'encodage UTF-8 : on passe a ISO
export LANG=fr_FR
#HB Variables d'environnement pour le Fortran Intel (32 ou 64 bits)
. /nfs/compilers/intel/compiler/11.1/059/bin/ifortvars.sh ia64
#HB Les filtrages
F_AIX="UNIX32 $F_EPLX"
F_CRY="CRAY $F_EPLX"
F_WIN="WIN $F_EPLX"
# Les sources
M_EPX="$EUROPLX/Miroir"
DSOURCE="$M_EPX/source"
DINCLUDE="$M_EPX/include"
DMANUAL="$M_EPX/manual"
DVALIDATE="$M_EPX/validate"
DBENCH="$M_EPX/bench"
# Biblio : version 32 ou 64 bits
DLIB="$EUROPLX/Biblio/64"
DMOD="$DLIB/Mod"
LIBRARY=$DLIB/europlexus_linux.a
MODULE=$DLIB/europlex_linux
EXE=./epxessai_linux
export LANGUAGE=C
# Version OpenMP
DLIB_OMP="$EUROPLX/Biblio_omp"
DMOD_OMP="$DLIB_OMP/Mod"
LIBRARY_OMP=$DLIB_OMP/europlexus_omp.a
MODULE_OMP=$DLIB_OMP/europlex_omp
EXE_OMP=./epxessai_omp

# Version MPI
DLIB_MPI="$DLIB/mpi"
DMOD_MPI="$DLIB_MPI/Mod"
LIBRARY_MPI=$DLIB_MPI/europlexus_mpi.a
MODULE_MPI=$DLIB_MPI/europlex_mpi
EXE_MPI=./epxessai_mpi
# Evolution
EVOLPATH="$EPATH/Proc_Evol"
DEVOL="$EUROPLX/Epx_Evol"
DTMP="$DEVOL/TMP"
INC_lst=$DEVOL/List_Include
BAL="$DEVOL/Reception/Boite"
DRAP=$DEVOL/Trace_Evolution/Rapports

e-plexus.chemin_o0

EUROPLX=~europlx
EPATH="$EUROPLX/bin"
if [ -z $PATH ]; then
  export PATH=$EPATH
else
  export PATH=$EPATH:$PATH
fi
# Pb avec l'encodage UTF-8 : on passe a ISO
export LANG=fr_FR
#HB Variables d'environnement pour le Fortran Intel (32 ou 64 bits)
. /nfs/compilers/intel/compiler/11.1/059/bin/ifortvars.sh ia64
#HB Les filtrages
F_AIX="UNIX32 $F_EPLX"
F_CRY="CRAY $F_EPLX"
F_WIN="WIN $F_EPLX"
# Les sources
M_EPX="$EUROPLX/Miroir"
DSOURCE="$M_EPX/source"
DINCLUDE="$M_EPX/include"
DMANUAL="$M_EPX/manual"
DVALIDATE="$M_EPX/validate"
DBENCH="$M_EPX/bench"
# Biblio : version 32 ou 64 bits
DLIB="$EUROPLX/Biblio/64_o0"
DMOD="$DLIB/Mod"
LIBRARY=$DLIB/europlexus_linux.a
MODULE=$DLIB/europlex_linux
MODULE_o0=$DLIB/europlex_linux_o0
EXE=./epxessai_linux
export LANGUAGE=C
# Version OpenMP
DLIB_OMP="$EUROPLX/Biblio_omp"
DMOD_OMP="$DLIB_OMP/Mod"
LIBRARY_OMP=$DLIB_OMP/europlexus_omp.a
MODULE_OMP=$DLIB_OMP/europlex_omp
EXE_OMP=./epxessai_omp
# Version MPI
DLIB_MPI="$DLIB/mpi"
DMOD_MPI="$DLIB_MPI/Mod"
LIBRARY_MPI=$DLIB_MPI/europlexus_mpi.a
MODULE_MPI=$DLIB_MPI/europlex_mpi
MODULE_MPI_o0=$DLIB_MPI/europlex_mpi_o0
EXE_MPI=./epxessai_mpi_o0
# Evolution
EVOLPATH="$EPATH/Proc_Evol"
DEVOL="$EUROPLX/Epx_Evol"
DTMP="$DEVOL/TMP"
INC_lst=$DEVOL/List_Include
BAL="$DEVOL/Reception/Boite"
DRAP=$DEVOL/Trace_Evolution/Rapports

epx_cft_o0

#!/bin/sh
. ~europlx/bin/e-plexus.chemin_o0
if [ $# = 0 ] ; then
  echo "Compilation des sources d'europlexus"
  echo "Ex: $0 -opt1 -opt2 +optval2 celem.ff [x-z]*.ff "
  echo "Version pour debugger: $0 -0g c"
  exit 50
fi
MPI_CFLAGS=""
OMP_FLAG=""
opt="-traceback"
OPTIM="-O0"
LMPI="0"
LOMP="0"
LDBX="0"
bid1="0"
while [ $bid1 = 0 ] ; do
case $1 in
  -V) ifort -V ; exit ;;
  -mpi) LMPI="1" ; shift ;;
  -omp) LOMP="1" ; shift ;;
  -0g) OPTIM="-O0 -g" ; LDBX="1" ; shift ;;
  -*) opt="$opt $1" ; shift ;;
  +*) opt="$opt ${1#+}" ; shift ;;
  *) bid1="1";;
esac
done
kopt=$opt
# les includes standards
INCLUDES="-I$DMOD -I$DINCLUDE"
if [ $LMPI == "1" ] ; then
  MPI_CFLAGS=$(mpif90 --showme:compile)
  INCLUDES="-I$DMOD_MPI $INCLUDES"
  MPI_FLAG="MPI"
fi
if [ $LOMP == "1" ] ; then
  opt="-fpp -openmp $opt"
  INCLUDES="-I$DMOD_OMP $INCLUDES"
  OMP_FLAG="OMP"
fi
while [ $1"xyz" != "xyz" ]
do
  c1=${1%.*}
  c=`basename $c1`
  shift
  if [ -r $c1.ff ]
  then
    echo ">>>> Compilation $OPTIM $kopt $MPI_FLAG $OMP_FLAG: $c1.ff <<<<"
  else
    echo "fichier >> $c1.ff << n'existe pas "
    exit 25
  fi
  epx_filtre $c1.ff ./$c.f $F_AIX T_LINUX $MPI_FLAG
  if [ $? != 0 ] ; then exit 5 ; fi
  rm ./$c.o 2>/dev/null
  ifort -c $OPTIM -warn none -auto -u -I./ $opt $MPI_CFLAGS $INCLUDES ./$c.f
  rc=$?
  if [ $LDBX == "0" ]; then rm ./$c.f 2>/dev/null ; fi
  if [ $rc != 0 ] ; then exit $rc; fi
done
exit 0

epx_launch_mpi

#!/bin/sh
# Arguments
echo " "
echo " EUROPLEXUS : Parallel MPI run (OpenMPI)"
echo " "
if [ $# = 0 ] ; then
  echo "Options:"


More information

Notes on the SNOW/Rmpi R packages with OpenMPI and Sun Grid Engine

Notes on the SNOW/Rmpi R packages with OpenMPI and Sun Grid Engine Notes on the SNOW/Rmpi R packages with OpenMPI and Sun Grid Engine Last updated: 6/2/2008 4:43PM EDT We informally discuss the basic set up of the R Rmpi and SNOW packages with OpenMPI and the Sun Grid

More information

SFTP SHELL SCRIPT USER GUIDE

SFTP SHELL SCRIPT USER GUIDE SFTP SHELL SCRIPT USER GUIDE FCA US INFORMATION & COMMUNICATION TECHNOLOGY MANAGEMENT Overview The EBMX SFTP shell scripts provide a parameter driven workflow to place les on the EBMX servers and queue

More information

Parallel Programming for Multi-Core, Distributed Systems, and GPUs Exercises

Parallel Programming for Multi-Core, Distributed Systems, and GPUs Exercises Parallel Programming for Multi-Core, Distributed Systems, and GPUs Exercises Pierre-Yves Taunay Research Computing and Cyberinfrastructure 224A Computer Building The Pennsylvania State University University

More information

Introduction to Running Computations on the High Performance Clusters at the Center for Computational Research

Introduction to Running Computations on the High Performance Clusters at the Center for Computational Research ! Introduction to Running Computations on the High Performance Clusters at the Center for Computational Research! Cynthia Cornelius! Center for Computational Research University at Buffalo, SUNY! cdc at

More information

Working with HPC Apps

Working with HPC Apps Working with HPC Apps Abhinav Thota Scientific Applications and Performance Tuning Research Technologies Indiana University July 09, 2014 What is this class about? Working with applications on HPC machines

More information

WES 9.2 DRIVE CONFIGURATION WORKSHEET

WES 9.2 DRIVE CONFIGURATION WORKSHEET WES 9.2 DRIVE CONFIGURATION WORKSHEET This packet will provide you with a paper medium external to your WES box to write down the device names, partitions, and mount points within your machine. You may

More information

Linux introduction. Dinesh Gupta ICGEB, India 1/27/2010 5:43 PM

Linux introduction. Dinesh Gupta ICGEB, India 1/27/2010 5:43 PM Linux introduction Dinesh Gupta ICGEB, India Linux The Linux operating system (OS) was first coded by a Finnish computer programmer called Linus Benedict Torvalds in 1991, when he was just 21! He had got

More information

Backup of ESXi Virtual Machines using Affa

Backup of ESXi Virtual Machines using Affa Backup of ESXi Virtual Machines using Affa From SME Server Skill level: Advanced The instructions on this page may require deviations from procedure, a good understanding of linux and SME is recommended.

More information

Introduction to Linux. Francisco Salavert Torres February 29th, 2016

Introduction to Linux. Francisco Salavert Torres February 29th, 2016 Introduction to Linux Francisco Salavert Torres February 29th, 2016 1 What is GNU/Linux? GNU/Linux to simplify Linux, is a free Operating System (OS). By Operating System, we mean the suite of programs

More information

CIS18A: Introduction to Linux/Unix CLASSROOM ATC 204

CIS18A: Introduction to Linux/Unix CLASSROOM ATC 204 College academic Calendar: Winter 2015 http://deanza.fhda.edu/calendar/winterdates.html Instructor Information CIS18A: Introduction to Linux/Unix CLASSROOM ATC 204 WINTER 2015 : Section INFO: 00444 CIS

More information

Code::Block manual. for CS101x course. Department of Computer Science and Engineering Indian Institute of Technology - Bombay Mumbai - 400076.

Code::Block manual. for CS101x course. Department of Computer Science and Engineering Indian Institute of Technology - Bombay Mumbai - 400076. Code::Block manual for CS101x course Department of Computer Science and Engineering Indian Institute of Technology - Bombay Mumbai - 400076. April 9, 2014 Contents 1 Introduction 1 1.1 Code::Blocks...........................................

More information

Introduction to the SGE/OGS batch-queuing system

Introduction to the SGE/OGS batch-queuing system Grid Computing Competence Center Introduction to the SGE/OGS batch-queuing system Riccardo Murri Grid Computing Competence Center, Organisch-Chemisches Institut, University of Zurich Oct. 6, 2011 The basic

More information

Install guide for Websphere 7.0

Install guide for Websphere 7.0 DOCUMENTATION Install guide for Websphere 7.0 Jahia EE v6.6.1.0 Jahia s next-generation, open source CMS stems from a widely acknowledged vision of enterprise application convergence web, document, search,

More information

Bourne Shell Programming

Bourne Shell Programming Borne Shell Background Early Unix shell that was written by Steve Bourne of AT&T Bell Lab. Basic shell provided with many commercial versions of UNIX Many system shell scripts are written to run under

More information

Fundamentals of Linux

Fundamentals of Linux To register or for more information call our office (208) 898-9036 or email register@leapfoxlearning.com Fundamentals of Linux Duration: Traditional Instructor Led Learning -4.00 Day(s) Audience: End-users

More information

CPSC 226 Lab Nine Fall 2015

CPSC 226 Lab Nine Fall 2015 CPSC 226 Lab Nine Fall 2015 Directions. Our overall lab goal is to learn how to use BBB/Debian as a typical Linux/ARM embedded environment, program in a traditional Linux C programming environment, and

More information

Overview of presentation

Overview of presentation Overview of presentation What is C3SE & SNIC, a compute cluster, Beda Differences between a cluster and a normal workstation Conceptional overview of a normal work flow Concrete starting points Accessing

More information