Experiences of numerical simulations on a PC cluster
December
Introduction: the Beowulf concept
- Uses commodity off-the-shelf hardware to build a massively parallel computer
- Nodes run in a dedicated network; only one node, the master node, is connected to the LAN (Figure)
- Nodes run open-source software: Linux or FreeBSD, PVM, MPI
- The first Beowulf was built in 1994 at the Center of Excellence in Space Data and Information Sciences (CESDIS)
Beowulf network configuration
[Figure: slave nodes connected through an Ethernet hub; master node connected to the LAN]
Hardware
- Fujitsu-Siemens Primergy servers with one Pentium processor per node
- Memory: SDRAM in the master node and in the slave nodes
- Storage: SCSI disk in the master node
- Network: Intel Gigabit copper Ethernet NICs (PCI)
- Switch: HP ProCurve with two Gigabit Ethernet modules
Software
- OS: Linux (pre-release kernel), a RedHat-derived distribution
- Slave nodes are diskless: startup is done via PXE and the OS is on NFS, which makes the cluster easy to maintain and upgrade
- MPICH: a Message Passing Interface (MPI) implementation, callable from Fortran and C
- Job scheduling: GNU Queue
- Cluster monitoring: Ganglia Cluster Toolkit
- Compilers: gcc and Absoft
Ganglia Cluster Toolkit
[Screenshot: cluster view]
Mathematical software
- Atlas, BLAS, LAPACK: sequential linear algebra libraries
- ScaLAPACK and PBLAS: parallel versions of subsets of BLAS and LAPACK; dense and band matrices supported
- PETSc: PDE solver toolkit; includes basic matrix algebra operations and linear and nonlinear equation solvers; supports both sparse and dense matrices; also features interfaces to several other packages (SuperLU, Matlab)
- Both PETSc and ScaLAPACK use the MPI library for communications
Writing parallel code
- More or less complicated than sequential code, depending on the library used:
  - MPI: write everything from scratch
  - High-level libraries (PETSc, ScaLAPACK): the libraries take care of the data distribution and communication
- Tradeoff between development time and execution time
Code example: Matlab versus PETSc

Matlab (time stepping with precomputed LU factors L and U):

    for ii = 1:nt
        T(:,ii+1) = U \ (L \ (T(:,ii) + M*T(:,ii) + dt*T_a + M*d*u(:,ii).^2));
    end

The same step in PETSc (C, SLES-era interface):

    for (i = 0; i < nt; i++) {
        MatMult(p, u[i], tmpn);               /* u_i                          */
        VecPointwiseMult(tmpn, tmpn, tmpn);   /* u_i^2                        */
        MatMult(M, tmpn, tmpn);               /* M u_i^2                      */
        VecAXPY(&dt, T_a, tmpn);              /* dt T_a + M u_i^2             */
        MatMultAdd(M, pt[i], tmpn, pt[i]);    /* T_i + M T_i + dt T_a + M u_i^2 */
        SLESSolve(slesL, pt[i], tmpn, &its);  /* forward solve with L         */
        SLESSolve(slesU, tmpn, pt[i], &its);  /* back solve with U            */
    }
Performance
- Depends heavily on the application
- Imbalances in the data distribution among the nodes result in surprising calculation times
- Parallel versions of three different numerical simulations:
  - bioheat transfer equation using FEM
  - aerosol size distribution estimation using an SIR filter
  - ultrasound wave field simulation using the ultra weak variational formulation (UWVF)
Bioheat equation solver using FEM: computation domain and thermal dose
[Figure: computation domain with subdomains Ω_I, Ω_II, Ω_III, Ω_IV; axes x (m) and y (m)]
Bioheat equation solver using FEM: calculation times
[Plots: calculation time t (s) versus number of processors and versus number of nodes, for several problem sizes N]
Aerosol size estimation (SIR filter): calculation times
[Plot: calculation time t (s) versus number of processors, for several numbers of particles in the SIR filter]
Helmholtz UWVF solver
[Plot: calculation time t (s) of the UWVF solver versus number of processors; domain consisting of tetrahedra, frequency f in kHz]
Conclusions
- Beowulf clusters are cost-effective alternatives to traditional parallel computers for memory-bound problems
- Network latency is THE problem for matrix calculations
  - Special NICs (Myrinet, SCI) have much lower latencies than Gb Ethernet, but typically cost more per node
  - A cheaper option for a low-bandwidth network could be IEEE 1394b (FireWire) for small or middle-size clusters
- For easily parallelizing problems, clusters of non-dedicated desktop computers can be used