Application Guide: ANSYS
This page contains information on how to run a variety of ANSYS batch jobs on BlueBEAR.
See the ANSYS page on the BEAR Apps website for information on available versions.
Parallel wrapper script
We provide helper scripts to generate the node list (or node-file) required by parallel ANSYS jobs:
module load slurm-helpers
NODEFILE=$(make_nodelist)
The resulting ${NODEFILE} variable is then passed to the solver as follows:
- CFX: -par-dist "${NODEFILE}"
- Fluent: -cnf=${NODEFILE}
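As a quick sanity check you can print the generated value in your job script before handing it to the solver. A minimal sketch (the exact format of the output depends on the helper used and on the nodes allocated to your job):
module load slurm-helpers
# Generate the node list and echo it into the job log for reference
NODELIST=$(make_nodelist)
echo "Nodes for this job: ${NODELIST}"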
Example sbatch scripts
Please use the tabs below to see example batch scripts for ANSYS commands. For further general information on running BlueBEAR jobs, please see Jobs on BlueBEAR.
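For example, once you have saved one of the scripts below (the file name cfx_benchmark.sh is only illustrative), you can submit and monitor it with the standard Slurm commands:
# Submit the batch script to the scheduler
sbatch cfx_benchmark.sh
# List your queued and running jobs
squeue -u "$USER"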
This example runs cfx5solve on the ANSYS-provided example file Benchmark.def. Please adjust the paths accordingly to use your own input files.
#!/bin/bash
#SBATCH --ntasks=10
#SBATCH --ntasks-per-node=5 # this forces the job to distribute evenly across two nodes
#SBATCH --account=_ACCOUNT_NAME_
#SBATCH --qos=bbdefault
#SBATCH --time=10
set -e
module purge
module load bluebear
module load slurm-helpers
module load bear-apps/2022a
module load ANSYS/2023R1
# Copy the example file into the working directory
cp "${EBROOTANSYS}/v2*/CFX/examples/Benchmark.def" .
# Assign the list of the job's nodes to an environment variable
NODELIST=$(make_nodelist)
cfx5solve -def Benchmark.def -parallel -par-dist "${NODELIST}" -start-method "Open MPI Distributed Parallel"
cfx5solve options explained
- -def: use the specified file as the Solver Input File
- -parallel: run the ANSYS CFX Solver in parallel mode
- -par-dist: provide a comma-separated list of a job's nodes, e.g. the one created by the make_nodelist script
- -start-method: use the named start method (wrapper) to start the ANSYS CFX Solver. See the ${EBROOTANSYS}/v2*/CFX/etc/start-methods.ccl file for the possible methods.
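To check which start methods are defined for the ANSYS version you have loaded, you can view that file directly, for example:
module load bear-apps/2022a
module load ANSYS/2023R1
# Page through the start-method definitions shipped with this ANSYS version
less "${EBROOTANSYS}"/v2*/CFX/etc/start-methods.ccl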
Tip
For further information on the cfx5solve command, please run:
cfx5solve -help
N.B. Fluent requires an input journal file, along with any related case or mesh files, to be provided. In this example we use aircraft_wing_2m.jou, but please adjust the paths accordingly to use your own input files.
#!/bin/bash
#SBATCH --ntasks=10
#SBATCH --ntasks-per-node=5 # this forces the job to distribute evenly across two nodes
#SBATCH --account=_ACCOUNT_NAME_
#SBATCH --qos=bbdefault
#SBATCH --time=10
set -e
module purge
module load bluebear
module load slurm-helpers
module load bear-apps/2022a
module load ANSYS/2023R1
# Copy example input files
cp "${BB_APPS_DATA}/ANSYS/examples/fluent/"* .
JOURNAL_FILE="./aircraft_wing_2m.jou"
# Create the nodefile
NODEFILE=$(make_nodefile)
fluent 3ddp -t${SLURM_NTASKS} -mpi=intel -pib.ofed -cnf="${NODEFILE}" -g -i "${JOURNAL_FILE}"
fluent options explained
- -t: specify the number of processors. Use the Slurm-provided ${SLURM_NTASKS} environment variable here.
- -mpi: specify the MPI implementation
- -p: specify the MPI interconnect
- -cnf: specify the hosts file
- -g: run without GUI or graphics
- -i: read the specified journal file
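If all of your tasks fit on a single node, Fluent can run in local parallel mode without a hosts file. A minimal sketch of that variant, assuming the same module setup and journal file as in the example above:
#SBATCH --nodes=1
#SBATCH --ntasks=10
# All tasks run on one node, so -cnf (and an explicit MPI/interconnect choice) is not needed
fluent 3ddp -t${SLURM_NTASKS} -g -i "${JOURNAL_FILE}"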
Tips
- Understanding the fluent modes:
  - 2d: two-dimensional
  - 2ddp: two-dimensional double-precision
  - 3d: three-dimensional
  - 3ddp: three-dimensional double-precision
- For further information on the fluent command, please run: fluent -help
- There is some excellent documentation on writing Fluent journal files on the following University of Sheffield page: https://docs.hpc.shef.ac.uk/en/latest/referenceinfo/ANSYS/fluent/writing-fluent-journal-files.html (a minimal illustrative journal is sketched below).
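As a rough illustration of what such a journal might contain, the sketch below writes a hypothetical minimal journal from a job script using a heredoc; every TUI command and file name in it is an assumption for a generic case, so consult the page linked above and the Fluent documentation for the commands your workflow actually needs.
# Write a hypothetical minimal journal file (illustrative only)
cat > my_case.jou << 'EOF'
/file/read-case my_case.cas.h5
/solve/initialize/initialize-flow
/solve/iterate 100
/file/write-data my_case_result.dat.h5
/exit yes
EOF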