Parallel run with Fluent on an HPC cluster

Fluent

ANSYS Fluent is a commercial solver used for computational fluid dynamics (CFD) simulations.

Fluent Journal Script

A Fluent journal script (e.g. "journalfile.jou") might look like this:
; read the case and data files
file/read-case-data aircraft.cas
; run 1000 iterations
it 1000
; write the data file
wd "aircraft.dat"

Submitting a Batch Fluent Job to Slurm

With the correct journal file set up, the final step is to submit this to the Slurm queuing system.
#!/bin/sh
#SBATCH -n 4            # 4 cores (matches the -t4 below)
#SBATCH -t 1-03:00:00   # 1 day and 3 hours
#SBATCH -p compute      # partition name
#SBATCH -J fluent_batch # sensible name for the job

# load the relevant module files. NB: if unsure about
# what you need, please contact ops
source /etc/profile.d/modules.sh
module load apps fluent

# run fluent in batch on the allocated node(s)
# the '-t4' specifies 4 cores (academic licensing)
fluent 2d -g -t4 -i journalfile.jou > outputfile.out
The fluent options in the above Slurm script are:
  • 2d - the Fluent solver version used (2-D)
  • -g - run without the graphical environment
  • -i journalfile.jou - read the journal file provided
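Assuming the script above is saved as fluent_batch.sh (the file name is only illustrative), it is submitted to Slurm with sbatch and can be monitored with squeue:
sbatch fluent_batch.sh
squeue -u $USER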

Running Fluent in parallel

Example submission script:
#!/bin/sh
#SBATCH -n 16           # cores
#SBATCH -t 1-00:00:00     # 1 day walltime
#SBATCH -p compute      # partition name
#SBATCH -J paraFluent    # sensible name for the job

# load the relevant module files
source /etc/profile.d/modules.sh
module load apps fluent default-intel-impi

# build a comma-separated list of the nodes allocated to the job
FLUENTNODES="$(scontrol show hostnames | paste -sd, -)"
echo $FLUENTNODES

fluent 3ddp -t16 -mpi=intel -ssh -pinfiniband -cnf=$FLUENTNODES -g -i test.jou > output.`date '+%F_%H-%M-%S'`
Notes on the example script:
  • For MPI to work, the default-intel-impi module (or similar) is required and must be loaded in the batch script
  • -t16 specifies 16 cores
  • -mpi=intel ensures the correct MPI is used
  • -ssh forces Fluent to use ssh instead of rsh, which is not available
  • -cnf=$FLUENTNODES - Fluent requires a list of hosts, which can be obtained from the Slurm environment (a hosts-file alternative is sketched below)
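If passing the node list through a shell variable proves fragile, -cnf also accepts the name of a file containing the hostnames; a minimal sketch (the file name fluent.hosts is arbitrary):
scontrol show hostnames > fluent.hosts
fluent 3ddp -t16 -mpi=intel -ssh -pinfiniband -cnf=fluent.hosts -g -i test.jou > output.`date '+%F_%H-%M-%S'`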

Meshing tool

The meshing tool in Fluent is called ICEM. You can start ICEM with the command icemcfd.
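For example, on a cluster that provides ICEM through the module system (the exact module name is an assumption; check module avail on your system):
module load ansys    # or the apps/fluent modules, depending on the cluster
icemcfd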

Submit a Fluent job

You can run a Fluent job interactively on ALICE using the qsub command. However, it is more sensible to run Fluent as a non-interactive (batch) job.
To run Fluent as a non-interactive job you need to:
  • Store instructions including locations of input and output files in a Fluent journal file. A journal file is a simple text file containing Fluent Text User Interface (TUI) commands.
  • Use the -i option followed by the name of the journal file to make Fluent execute the commands in a journal file.
  • Use the -g option to disable the graphical user interface (GUI) since this isn't required.
  • Use a submission script to submit the Fluent job to the job scheduler; the scheduler uses this script to execute the Fluent command (the general form of the command is shown after this list).
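Putting these options together, the general form of the Fluent command used in a submission script is (the angle-bracket names are placeholders, not actual file names):
fluent <version> -g -i <journal file> > <output file>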

Example

In the example below, Fluent reads its instructions from the simulation.jou journal file, loads the case and data files simulation.cas and simulation.dat, and runs 10000 iterations.
Example journal file simulation.jou:
; Read case and data files
file read-case-data simulation.cas
; Do 10000 iterations
it 10000
; Write output back to data file
file write-data simulation.dat
; 'yes' here allows the existing .dat file to be overwritten
yes
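If Fluent should terminate once the data file has been written (as in the batch examples later on this page), the journal can be extended with an explicit exit:
; Exit Fluent ('yes' confirms the exit)
exit
yes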
Example submission script submit.sh:
This example requests one hour of walltime, an entire 28-core node (nodes=1:ppn=28) and 60GB of virtual memory, but runs only 16 parallel processes because that is the limit imposed by the Fluent license.
#!/bin/bash
#
#PBS -N Fluent
#PBS -m bae
#PBS -M username@leicester.ac.uk
#PBS -l walltime=01:00:00
#PBS -l nodes=1:ppn=28
#PBS -l vmem=60g
#PBS -q parallel

# Load the Fluent module
module load fluent/18

# nprocs is the number of cores to use
# note that the number of processors is limited to 16 by the fluent  
#     license, though the node has 28 available 
nprocs=16

cd $PBS_O_WORKDIR

# Start fluent using the specified journal file
fluent 2ddp -ssh -g -pshmem -mpi=openmpi -t${nprocs} -i simulation.jou
Submit the Fluent submission job to the queue:
> qsub submit.sh
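The job can then be monitored with the usual PBS commands, for example:
> qstat -u username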

Create your own submission script

You will usually only need to make small changes to the example submission script; the most common ones (sketched after this list) are:
  • Walltime
  • Memory limits
  • Fluent solver version (this is 2ddp, i.e. 2-D double precision, in the example above)
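As a hedged sketch (illustrative values only, not recommendations), a larger 3-D double-precision run might change these lines of the example script:
#PBS -l walltime=12:00:00   # longer walltime
#PBS -l vmem=120g           # higher memory limit

# switch to the 3-D double-precision solver
fluent 3ddp -ssh -g -pshmem -mpi=openmpi -t${nprocs} -i simulation.jou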

Running Fluent/Ansys Jobs

Interactive GUI Jobs

Ansys/Fluent is a full GUI application, so refer to HPC Visual Access for a more responsive interactive session.
Serial Job:
Reserve a compute node to run Ansys/Fluent interactively by typing:
 qsub -I -X

You will be assigned one of the compute nodes. To use Ansys, you must set up your environment on the compute node using the following shell command:
module load ansys
Copy the file flow.msh from /usr/local/doc/ANSYS
cp /usr/local/doc/ANSYS/flow.msh ./

Open the Fluent launcher by typing:
fluent

(Note: the menus and buttons may have changed in newer versions)
  • Choose Dimension 2D and Processing Options as Serial; press OK
  • In the GUI, click File->Read->Mesh…, choose flow.msh from the file list and click OK. Under General, click Display; you will see the pipe geometry in the display window
  • You may want to set a different configuration, but in this example the default settings are used
  • Click Solve->Initialization and click the Initialize button (again, you may want to change the initial values)
  • Click Solve->Run Calculation and then click the Calculate button
  • When the calculation-complete dialog box appears, click OK
  • Click File->Write->Case…, name the file flow-serial.cas and click OK
Parallel Job:
Depending on the number of processors (n) that you want to request, type:
qsub -I -X -l nodes=1:ppn=2
(Note: the example uses n=2)
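Once the interactive session starts on the compute node, load the module and launch Fluent; selecting Parallel with 2 processes in the launcher corresponds to the following command line (a sketch, assuming the 2-D solver as in the serial walkthrough):
module load ansys
fluent 2d -t2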

Batch Jobs

Serial Job:
Copy the following content into a file named flowinput-serial.in. If you have not created a .cas file using the interactive job, copy the flow-serial.cas file from /usr/local/doc/ANSYS.
; Read case file
rc flow-serial.cas
; Initialize the solution
/solve/initialize/initialize-flow
; Calculate 50 iterations
it 50
; Write data file
wd example50.dat
; Calculate another 50 iterations
it 50
; Write another data file
wd example100.dat
; Exit FLUENT
exit
yes
(Note: rc => /file/read-case, wd => /file/write-data and it => /solve/iterate are aliases for the full TUI commands; make sure these aliases exist in your Fluent version, otherwise use the complete command, as done for /solve/initialize/initialize-flow above. A line beginning with ; is a comment.)
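For example, if the it alias were not available, the iteration step above would be written with the full command:
; Calculate 50 iterations (full TUI command)
/solve/iterate 50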
Copy the following content into a file named fluent-serial.pbs:
#PBS -l walltime=2:10:00
#PBS -l nodes=1:ppn=1
#PBS -N test-ansys-serial
#PBS -j oe

module load ansys
cd $PBS_O_WORKDIR
# NPROCS is the number of cores allocated by PBS (not used in this serial run)
NPROCS=`wc -l < $PBS_NODEFILE`
# copy the input files to the scratch directory and run from there
cp flowinput-serial.in flow-serial.cas $PFSDIR
cd $PFSDIR

#Run fluent
fluent 2ddp -g < flowinput-serial.in
# copy the results back to the submission directory
cp * $PBS_O_WORKDIR
cd $PBS_O_WORKDIR

Submit the serial job
qsub fluent-serial.pbs

Parallel job: 
Follow the same procedure as for the serial job, except choose the Processing Option as Parallel and the Number of Processes as n (e.g. 2). Also name the case file flow-parallel.cas to keep it separate from the serial one. Copy the following content into a file flowinput-parallel.in (the same as the serial version apart from the case file name).

; Read case file
rc flow-parallel.cas
; Initialize the solution
/solve/initialize/initialize-flow
; Calculate 50 iterations
it 50
; Write data file
wd example50.dat
; Calculate another 50 iterations
it 50
; Write another data file
wd example100.dat
; Exit FLUENT
exit
yes
Copy the following content into a file named fluent-parallel.pbs:
#PBS -l walltime=2:10:00
#PBS -l nodes=1:ppn=2
#PBS -N test-ansys-parallel
#PBS -j oe

module load ansys
cd $PBS_O_WORKDIR
# NPROCS is the number of cores allocated by PBS
NPROCS=`wc -l < $PBS_NODEFILE`
# copy the input files to the scratch directory and run from there
cp flowinput-parallel.in flow-parallel.cas $PFSDIR
cd $PFSDIR

#Run fluent
fluent 2ddp -t${NPROCS} -p -cnf=$PBS_NODEFILE -g < flowinput-parallel.in
# copy the results back to the submission directory
cp * $PBS_O_WORKDIR
cd $PBS_O_WORKDIR
(Note: in this script, #PBS -l nodes=1:ppn=2 indicates that you have requested 2 processors, the same number as was used while creating the .cas file interactively)
Submit the parallel job:
qsub fluent-parallel.pbs

Performance

Some investigation has shown that improved FLUENT performance can be gained for parallel jobs, at least in some cases, by minimizing the number of nodes used through the use of multicore processors. Prior to extensive use of FLUENT for a large number of similar long-running jobs, some experimentation should be carried out to determine the best type of node(s) to use.
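A simple way to carry out such experimentation is to submit the same case with different core counts and compare the wall-clock times; a hedged sketch based on the PBS example above (this assumes submit.sh reads the core count from the nprocs environment variable instead of hard-coding it):
# submit the same Fluent case with 1, 2, 4, 8 and 16 cores
for n in 1 2 4 8 16; do
    qsub -l nodes=1:ppn=${n} -v nprocs=${n} -N fluent_scale_${n} submit.sh
done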
