Large Memory Jobs on BlueBEAR
Each of BlueBEAR’s Sapphire Rapids nodes (introduced summer 2024) and Ice Lake nodes (introduced autumn 2021) has approximately 490GB of memory (RAM) available for running jobs and is therefore suitable for tasks that require a large amount of memory. The simplest way to request a large amount of memory is to include the following #SBATCH headers in your job script:
#SBATCH --nodes=1
#SBATCH --mem=244G
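For context, a minimal job script sketch using these headers might look like the following; the walltime, QOS, account name and final command are illustrative placeholders rather than values from this guide, so substitute your own project details:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --mem=244G
#SBATCH --time=12:0:0
#SBATCH --qos=bbdefault
#SBATCH --account=_your_project_

# Load any modules your software needs, then run the memory-intensive application
./my_large_memory_program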
Efficient Use
To ensure that BlueBEAR’s resources are used efficiently, we prefer that resource requests work on the basis of quarter-node slices, which means that other jobs can run on the same node concurrently with a larger-memory job. Please therefore request --mem in multiples of 122G.
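For instance, a job that needs around 300GB of memory would round up to three quarter-node slices:
#SBATCH --nodes=1
#SBATCH --mem=366G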
Another solution is to use the --mem-per-cpu Slurm option, and then scale using --ntasks. For example:
#SBATCH --nodes=1
#SBATCH --mem-per-cpu=6750M
#SBATCH --ntasks=54
In the above example we’re specifying 54 cores (three quarters of the 72 cores available on an Ice Lake node) to get 364500M of memory. Another user (or another of your jobs) could then ask for 18 cores at 6750M each, and get the remaining 121500M.
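The complementary quarter-node request mentioned above would then be:
#SBATCH --nodes=1
#SBATCH --mem-per-cpu=6750M
#SBATCH --ntasks=18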
BlueBEAR Large Memory Service (a.k.a. bblargemem QOS)
Following the changes to BlueBEAR’s topology outlined above, the bblargemem QOS has been retired: the higher-memory Ice Lake nodes are available via the standard QOSes.