Running Fluidity in parallel

Introduction

Fluidity is parallelised using MPI and standard domain decomposition techniques: the domain (mesh) is split into partitions and each processor (core) is assigned one partition, on which it solves the problem. During the simulation processors need to exchange information across partition boundaries; this is handled by Fluidity in a manner transparent to the user.

The steps necessary to run Fluidity in parallel are as follows:

  1. Decompose the mesh. As mentioned above, the mesh must be partitioned. This is done with the tools fldecomp or flredecomp, which take the mesh file and the number of partitions as input and return the decomposed mesh. See section Decomposing the Mesh below.
  2. Choose appropriate matrix solvers and preconditioners; section Parallel Specific Options below gives some advice.
  3. Run your simulation. Running in parallel is straightforward on a desktop or laptop, but additional scripts may be necessary when using specialised supercomputers. See section Launching Fluidity below and the sketch after this list.
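
For orientation, a minimal end-to-end sketch under assumed names (a triangle mesh with base name channel, an options file channel.flml, and four processes) might look like:

# Sketch only: "channel" and channel.flml are placeholder names.
fldecomp -m triangle -n 4 channel             # step 1: decompose into 4 parts
# step 2 (solver/preconditioner choices) is made in channel.flml itself
mpiexec -n 4 fluidity -v2 -l channel.flml     # step 3: run on 4 processes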

Decomposing the Mesh

To decompose a triangle mesh you must use fldecomp or flredecomp. Both tools are part of fltools; please look at fluidity tools if you have not already built and/or installed them.
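
As a quick check that the tools are available, something like the following can be used; the make target and paths are assumptions based on a typical source build, so consult your own build/install instructions:

# Assumes a source build of Fluidity; check your build documentation.
cd /path/to/fluidity
make fltools
which fldecomp flredecomp    # confirm both tools are on your PATH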

The tools are used as follows:

fldecomp -m triangle -n [PARTS] [BASENAME]

where BASENAME is the triangle mesh base name (excluding extensions) and "-m triangle" instructs fldecomp to perform a triangle-to-triangle decomposition. This will create PARTS partitioned triangle meshes together with PARTS .halo files.
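
For example, decomposing an assumed mesh with base name channel into four parts could look like the sketch below. The flredecomp alternative is also shown: it runs under MPI and works from the options file rather than the bare mesh; the flag meanings (-i current partitions, -o target partitions) are as commonly documented, so check flredecomp's help for your version.

# Assumed mesh base name "channel" (channel.node, channel.ele, channel.edge)
fldecomp -m triangle -n 4 channel
# Expect channel_0 ... channel_3 mesh files plus channel_0.halo ... channel_3.halo

# flredecomp alternative: runs under MPI and reads the options file
# (channel.flml here); the output base name channel_parallel is arbitrary.
mpiexec -n 4 flredecomp -i 1 -o 4 channel channel_parallel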

Parallel Specific Options

In the options file, select "triangle" under /geometry/mesh/from_file/format for the from_file mesh. For the mesh filename, enter the triangle mesh base name excluding all file and process number extensions.
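
A quick sketch of how the decomposed files relate to that setting, assuming the channel base name used above (the exact flml attribute layout may differ slightly between versions, so verify in Diamond):

# After decomposition there is one set of files per process:
#   channel_0.node  channel_0.ele  channel_0.edge  channel_0.halo
#   channel_1.node  channel_1.ele  channel_1.edge  channel_1.halo  ...
# In the options file enter only the base name "channel"; each MPI process
# then reads its own channel_<rank> files at run time.
grep -A 2 'from_file' channel.flml    # quick way to inspect the setting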

Also:

  • Remember to select parallel-compatible preconditioners in the prognostic field solver options; eisenstat is not suitable for parallel simulations (sor, for example, does work in parallel).

Launching Fluidity

To launch a parallel simulation, pass the options (.flml) file on the Fluidity command line and start it with your MPI launcher, e.g.:

mpiexec fluidity -v2 -l [OPTIONS FILE]
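
The number of MPI processes must match the number of mesh partitions. For a four-partition case with the options file tank.flml used in the example below, this would be:

# 4 processes to match a mesh decomposed with "-n 4"
mpiexec -n 4 fluidity -v2 -l tank.flml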

Example 1 - Straight run

gormo@rex:~$ cat host_file
rex
rex
rex
rex
mpirun -np 4 --hostfile host_file $PWD/dfluidity tank.flml

Example 2 - running inside gdb (one xterm and gdb session per MPI process; requires X forwarding)

xhost +rex
gormo@rex:~$ echo $DISPLAY
:0.0
mpirun -np 4 -x DISPLAY=:0.0 xterm -e gdb $PWD/dfluidity-debug

To run in a batch job on cx1, use something like the following PBS script:

#!/bin/bash
# Job name
#PBS -N backward_step
# Time required in hh:mm:ss
#PBS -l walltime=48:00:00
# Resource requirements
# Always try to specify exactly what you need and the PBS scheduler
# will make sure your job starts running as quickly as possible. If
# you ask for too much you could be waiting a while for sufficient
# resources to become available. Experiment!
#PBS -l select=2:ncpus=4
# Files to contain standard output and standard error
##PBS -o stdout
##PBS -e stderr
PROJECT=backward_facing_step_3d.flml
echo Working directory is $PBS_O_WORKDIR 
cd $PBS_O_WORKDIR
rm -f stdout* stderr* core*
module load intel-suite
module load mpi
module load vtk
module load cgns
module load petsc/2.3.3-p1-amcg
module load python/2.4-fake
# Launch the parallel run; mpiexec picks up the number of processes
# from the PBS resource request above
mpiexec $PWD/fluidity -v2 -l $PWD/$PROJECT 

This will run on 8 processors (2 * 4 from the line PBS -l select=2:ncpus=4).
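
Assuming the script is saved as, say, fluidity_job.pbs (the name is arbitrary), it is submitted and monitored with the usual PBS commands:

qsub fluidity_job.pbs    # submit the job to the queue
qstat -u $USER           # check the job's status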

Visualising Data

The output from a parallel run is a set of .vtu and .pvtu files. A .vtu file is written for each processor at each output timestep, e.g. backward_facing_step_3d_191_0.vtu is the .vtu file for step 191 from processor 0. A .pvtu file, which references the per-processor .vtu files, is also generated for each timestep, e.g. backward_facing_step_3d_191.pvtu is for timestep 191.
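
For example, a four-process run (the process count is assumed here) leaves the following files for step 191:

backward_facing_step_3d_191.pvtu      # wrapper referencing the pieces below
backward_facing_step_3d_191_0.vtu
backward_facing_step_3d_191_1.vtu
backward_facing_step_3d_191_2.vtu
backward_facing_step_3d_191_3.vtu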

The best way to view the output is using paraview. Simply open the .pvtu file and paraview will pull in the per-processor .vtu pieces.

On cx1, you will need to load the paraview module: module load paraview/3.4.0
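
A typical cx1 session might then be (passing the .pvtu file on the command line normally opens it directly):

module load paraview/3.4.0
paraview backward_facing_step_3d_191.pvtu &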
