Self-installing C/C++/Fortran Software for BlueBEAR¶
Because of BlueBEAR's heterogeneous architecture, you will need to consider how you compile and run packages written in compiled languages. We provide a number of tools to help with your software development needs. Generally, compiling code on BlueBEAR is not as straightforward as on other HPC machines or your own machine.
Accessing Compilers and Build Tools¶
We provide access to several families of compilers on BlueBEAR, and their use will depend on the application you are compiling.
GNU Compiler Collection¶
Generally, we recommend that most people start by using the GNU family of compilers (along with FFTW, OpenMPI and OpenBLAS), which can be accessed via the foss toolchain:
module load bear-apps/2022b foss/2022b
And then, to build, execute the appropriate compiler command:
- Compiling C, C++ and Fortran applications:
gcc -o my_c_app my_c_app.c
g++ -o my_cpp_app my_cpp_app.cpp
gfortran -o my_fortran_app my_fortran_app.f90
Note
If you find gcc, g++, or gfortran not marking the output file as executable by default, add -fuse-ld=bfd to your compile flags.
- Compiling C, C++ and Fortran MPI applications (a minimal test program is sketched below):
mpicc -o my_mpi_c_app my_mpi_c_app.c
mpicxx -o my_mpi_cpp_app my_mpi_cpp_app.cpp
mpifort -o my_mpi_fortran_app my_mpi_fortran_app.f90
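If you want a quick way to check that the MPI wrappers and toolchain are working, a minimal test program such as the following (a hypothetical mpi_hello.c, not part of any BlueBEAR module) can be compiled with mpicc and run inside a Slurm job, e.g. with srun ./mpi_hello:
/* mpi_hello.c - minimal MPI check (illustrative example) */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}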
To get the best performance for your application on each of the node types, you must pass flags which tell the compiler to generate efficient code. Generally, you will want to specify at least the -O2 flag. The build scripts for many scientific packages will also add the flag -march=native to compilation commands, which tells the compiler to build for the processor it is running on. This is because newer processors support additional, optimised operations via extended instruction sets. We recommend that users submit their compilation jobs (once they have fixed any errors) as a job script, and label their build/install directory appropriately in order to take advantage of the hardware. For example, you could submit a variation of the following script for each node type, changing the constraint to each of cascadelake, icelake, sapphire, and emerald:
#!/bin/bash
#SBATCH --time=10:0
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --constraint=cascadelake
module purge; module load bluebear
module load bear-apps/2022b
module load foss/2022b
export BUILDDIR=myapplication_${BB_CPU}
mkdir -p ${BUILDDIR}  # ensure the architecture-specific build directory exists
gcc -o ${BUILDDIR}/myexecutable -march=native -O2 test.c
Then, in any job script, you would be able to run your processor optimised application with:
./myapplication_${BB_CPU}/myexecutable
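As a sketch, assuming you have built an executable for every node type your job may land on, a run script along the following lines will always pick the binary that matches the node running the job:
#!/bin/bash
#SBATCH --time=10:0
#SBATCH --nodes=1
#SBATCH --ntasks=1
module purge; module load bluebear
# ${BB_CPU} is set to the architecture of the node running the job,
# so the matching per-architecture build directory is selected automatically
./myapplication_${BB_CPU}/myexecutable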
For some external packages, a file called configure will be found in the source directory of the application. Usually, but not always, this script will have been generated by Autoconf and will produce a Makefile when run. Where this is the case, you can specify an installation directory in your script:
./configure --prefix=$(pwd)/installdir_${BB_CPU}
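A typical flow for such a package, assuming it follows the usual Autoconf conventions (configure, then make, then make install), might then look like this:
./configure --prefix=$(pwd)/installdir_${BB_CPU}
make
make install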
Users of the common CMake build system should create a separate build and installation directory for each architecture:
mkdir -p build_${BB_CPU}
cd build_${BB_CPU}
cmake ../path/to/application/source/directory -DCMAKE_INSTALL_PREFIX=install_${BB_CPU}
make install
Intel Parallel Studio¶
Alternatively, you can load the Intel compiler, Math Kernel Library and OpenMPI using the iomkl toolchain:
module load iomkl/2020a
or you can load the Intel compiler, Math Kernel Library and OpenMPI using the intel toolchain:
module load bear-apps/2022a intel/2022a
And then execute the appropriate Intel compilation command:
icc -o my_c_app my_c_app.c
icpc -o my_cpp_app my_cpp_app.cpp
ifort -o my_fortran_app my_fortran_app.f90
Note that the compilation wrapper commands for MPI applications are the same as for GCC, as we do not provide the Intel MPI library on BlueBEAR and provide OpenMPI instead.
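For example, with the intel/2022a toolchain shown above loaded, the same wrapper commands should work (here the wrappers are expected to invoke the Intel compilers underneath):
module load bear-apps/2022a intel/2022a
mpicc -o my_mpi_c_app my_mpi_c_app.c
mpifort -o my_mpi_fortran_app my_mpi_fortran_app.f90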
Most of the advice for the GNU compilers also applies here. It is important to note, however, that the optimisations performed by the Intel compilers can be more aggressive than those in the GNU compilers, resulting in better performance but at the expense of numerical accuracy in calculations. A particularly important flag to take note of is -fp-model, which tells the Intel compiler how aggressively floating-point calculations can be optimised. By default, the flag is set to -fp-model fast=1, and this results in calculations being less accurate than the IEEE 754 standard which the GNU compilers follow by default. Because of this, if you find that your code gets different results with the Intel compiler, you may want to adjust this setting by using the flags -fp-model precise or -fp-model strict.
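For example, to favour reproducible floating-point results over speed you might compile with something like the following (a sketch; check the Intel compiler documentation for the full semantics of each -fp-model setting):
icc -O2 -fp-model precise -o my_c_app my_c_app.c
ifort -O2 -fp-model strict -o my_fortran_app my_fortran_app.f90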
Using BlueBEAR Provided Compiled Libraries¶
Loading the Appropriate Modules¶
When you load a compiler toolchain like foss or iomkl, you should generally use versions of libraries which have been compiled with the same compiler. Some libraries will be labelled with the toolchain they are compiled with, e.g. PETSc/3.15.1-foss-2021a, but for others you will need to match the compiler version directly instead.
Toolchain | Compiler Versions |
---|---|
foss/2019a | GCC 8.2.0 |
foss/2019b | GCC 8.3.0 |
foss/2020a | GCC 9.3.0 |
foss/2020b | GCC 10.2.0 |
foss/2021a | GCC 10.3.0 |
foss/2021b | GCC 11.2.0 |
foss/2022a | GCC 11.3.0 |
foss/2022b | GCC 12.2.0 |
foss/2023a | GCC 12.3.0 |
iomkl/2019a | Intel 2019.1.144 and GCC 8.2.0 |
iomkl/2019b | Intel 2019.5.281 and GCC 8.3.0 |
iomkl/2020a | Intel 2020.1.217 and GCC 9.3.0 |
iomkl/2020b | Intel 2020.4.304 and GCC 10.2.0 |
iomkl/2021a | Intel 2021.2.0 and GCC 10.3.0 |
iomkl/2021b | Intel 2021.4.0 and GCC 11.2.0 |
intel/2022a | Intel 2022.1.0 and GCC 11.3.0 |
intel/2022b | Intel 2022.2.1 and GCC 12.2.0 |
intel/2023a | Intel 2023.1.0 and GCC 12.3.0 |
To take an example: we provide several variants of the GNU Scientific Library as a module called GSL. If you wanted to use this in code being compiled with the foss/2022b toolchain, you would need to load the GSL module as:
module load foss/2022b
module load GSL/2.7-GCCcore-12.2.0
Mixing Modules from Different BEAR Apps Versions¶
It is not possible to load modules from different BEAR Apps versions simultaneously. Where an attempt to load conflicting modules is made you will see an error message, reporting the incompatibility. For example:
$ module load cURL/7.69.1-GCCcore-9.3.0
GCCcore/9.3.0
zlib/1.2.11-GCCcore-9.3.0
cURL/7.69.1-GCCcore-9.3.0
$ module load cURL/7.76.0-GCCcore-10.3.0
Lmod has detected the following error:
The module load command you have run has failed, as it would result in an incompatible mix of
modules.
You have the "cURL/7.69.1-GCCcore-9.3.0" module already loaded and the module load command you
have run has attempted to load "cURL/7.76.0-GCCcore-10.3.0".
This is due to how software dependency chains are managed on BlueBEAR. If you receive the above incompatible mix of modules error message then you will need to modify your module load commands to ensure compatibility. The BEAR Applications website shows the associated “BEAR Apps Version” for each module to assist with this process.
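For example, one way to resolve the conflict shown above is to start from a clean environment and load the newer cURL module together with its matching BEAR Apps version. The bear-apps version used here is illustrative; check the BEAR Applications website for the version that actually provides the module you need:
module purge; module load bluebear
module load bear-apps/2021a   # illustrative: the BEAR Apps version matching GCCcore-10.3.0 builds
module load cURL/7.76.0-GCCcore-10.3.0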
Using Loaded Libraries¶
We recommend the tool pkg-config, which allows you to query the flags which should be passed to the compiler. For example, a user wishing to use the GNU Scientific Library can do so by querying the include path and the library path from pkg-config:
module load foss/2022b
module load GSL/2.7-GCCcore-12.2.0
gcc -I$(pkg-config gsl --variable=includedir) $(pkg-config gsl --libs) test.c
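Here, test.c could be any program that uses GSL; a minimal illustration (hypothetical, not part of the module) is:
/* test.c - minimal GSL usage example (illustrative) */
#include <stdio.h>
#include <gsl/gsl_sf_bessel.h>

int main(void)
{
    double x = 5.0;
    /* Bessel function of the first kind, J0(x) */
    double y = gsl_sf_bessel_J0(x);
    printf("J0(%g) = %.18e\n", x, y);
    return 0;
}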
You can see a list of all of the variables available for a particular package by running:
pkg-config <packagename> --print-variables
For some packages, the default settings returned by pkg-config may not be optimal. This is especially the case where the library provides multiple versions, for example both a serial and a parallelised version, as with the FFTW library. We encourage you to check that the flags and list of libraries returned are correctly specified.
If you need more control over the flags, every library we provide also has an environment variable, prefixed EBROOT<packagename>, which stores its installation directory. For example, you can find all of the libraries and header files for the GNU Scientific Library within ${EBROOTGSL}.
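For example, the GSL program above could equally be compiled against the module's installation directory directly. This is a sketch: the exact library names (and whether -lm is needed) vary from package to package.
gcc -I${EBROOTGSL}/include test.c -L${EBROOTGSL}/lib -lgsl -lgslcblas -lm -o test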
Build Tools¶
Most real-world projects invoke compilers via build tools to simplify the building process. There are many choices, and we try to provide up-to-date versions of these. While you may find that versions are available immediately after logging in to BlueBEAR, these are the ones provided and required by the operating system and are generally older versions which many projects no longer support. Because of this, we strongly recommend that you load your build tools (for example make, CMake, Autotools or Bazel, discussed below) from the modules system rather than relying on the operating system versions.
With all of these tools, you may find that they do not choose the compiler that you want by default - e.g. you may wish to use the Intel compilers and find that the build tools instead choose the GNU Compilers. In this case, you will need to specify the compiler. Each tool has its own way of doing this:
Tool | Basic Invocation |
---|---|
make | CC=gcc CXX=g++ FC=gfortran make |
CMake | cmake . -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ -DCMAKE_Fortran_COMPILER=gfortran |
Autotools | ./configure CC=gcc CXX=g++ FC=gfortran |
Bazel | CC=gcc bazel |
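For example, to force CMake to use the Intel compilers rather than GCC (assuming an Intel toolchain module is loaded), the same pattern applies with the Intel compiler commands substituted in:
cmake . -DCMAKE_C_COMPILER=icc -DCMAKE_CXX_COMPILER=icpc -DCMAKE_Fortran_COMPILER=ifort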
Please note that Ninja build files are generally generated by another tool such as CMake, and so should be regenerated rather than trying to specify the compiler to them directly.
It is worth noting that build tools will often try to find external dependencies during their configuration stage. Sometimes this is automatic; for example, CMake will often find packages specified in the pkg-config paths and so will detect BlueBEAR modules that you have loaded. However, if a tool is not set up to detect the package you are loading automatically, you may need to specify the locations of dependencies yourself, either by modifying the scripts or by passing variables to the tool. Using the EBROOT<packagename> variables is usually helpful for this.
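As an illustrative sketch (the exact variable or flag names depend on the build system and the package in question), you could point a CMake-based project at a loaded GSL module like this:
# CMAKE_PREFIX_PATH tells CMake where to search for dependencies
cmake .. -DCMAKE_PREFIX_PATH=${EBROOTGSL}
# For Autotools projects the option name varies by package; --with-gsl here is
# hypothetical, so check ./configure --help for the real flag
./configure --with-gsl=${EBROOTGSL}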