Software libraries (generic)
The MeluXina User Software environment offers many software libraries that can be used in your C/C++, Fortran, Python and other applications. This page showcases how you can use them when building your application.
First you will need to activate the corresponding environment module with `module load <library/version>`.
The module populates your shell's environment with several variables, one of which is called `EBROOT<LIBRARYNAME>`. You may use `env | grep EBROOT` to see all the variables that appear once a module is loaded; note that any dependencies of the module will also create entries of this type.
The `EBROOT*` variable contains the library's installation path and can be used in your commands and scripts (`$EBROOT<NAME>`) to point to specific folders underneath it and to link your application against the library (e.g. `$EBROOT<NAME>/lib`, `$EBROOT<NAME>/include`, etc.). Prefer using the variable as the reference to the installation path instead of hard-coding the full path, which may change between software stack releases.
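As a sketch of this pattern, the snippet below simulates an `EBROOT` variable by hand (on the cluster it would be set automatically by `module load`; the module name and path here are purely illustrative) and composes compiler flags from it:

```shell
# On MeluXina, 'module load <library/version>' would set this automatically;
# here we assign an illustrative value by hand to demonstrate the pattern.
EBROOTEXAMPLE=/apps/USE/easybuild/release/2021.2/software/Example/1.0

# Build compiler/linker flags from the prefix instead of hard-coding paths:
CFLAGS="-I${EBROOTEXAMPLE}/include"
LDFLAGS="-L${EBROOTEXAMPLE}/lib"

echo "$CFLAGS $LDFLAGS"
```

If the software stack is upgraded and the installation prefix moves, only the module version in `module load` changes; every flag derived from the variable follows along.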
We provide two examples, covering a data-management and a GPU-accelerated library:
- HDF5, for file format and data model management
- cuDNN, the NVIDIA Deep Neural Network library, providing GPU-accelerated primitives such as forward and backward convolution, pooling and normalization
Example using HDF5
EasyBuild module description
HDF5 is a data model, library, and file format for storing and managing data. It supports an unlimited variety of data-types, and is designed for flexible and efficient I/O and for high volume and complex data.
Available HDF5 versions
Check the available versions on MeluXina with the module command:
module avail HDF5
Terminal output example
------------------------- /apps/USE/easybuild/release/latest/modules/all --------------------------
HDF5/1.10.7-gompi-2020b HDF5/1.12.0-gompi-2020b
HDF5/1.10.7-gompic-2020b HDF5/1.12.1-gompi-2020b (L,D)
Where:
L: Module is loaded
D: Default Module
For example, in the MeluXina 2021.2 software stack you will find:
| Module | Build |
|---|---|
| HDF5/1.10.7-gompi-2020b | MPI (OpenMPI) - parallel |
| HDF5/1.10.7-gompic-2020b | MPI (OpenMPI) + CUDA - parallel |
| HDF5/1.12.0-gompi-2020b | MPI (OpenMPI) - parallel |
| HDF5/1.12.1-gompi-2020b | MPI (OpenMPI) - parallel |
Compilation of code using HDF5 library
Load a serial or MPI-parallel build of HDF5 in order to compile your program:
module load HDF5/1.12.1-gompi-2020b
The `EBROOTHDF5` and `HDF5_DIR` environment variables are then populated in your shell's environment and point to the HDF5 installation directory:
echo $EBROOTHDF5
Terminal output example
/apps/USE/easybuild/release/2021.2/software/HDF5/1.12.1-gompi-2020b/
You can now compile your C/C++/Fortran code (e.g. `writedata.cpp`) step by step, using the MPI C++ compiler wrapper:
mpicxx -c -I$EBROOTHDF5/include writedata.cpp
and then link against the HDF5 libraries:
mpicxx -o writedata writedata.o -L$EBROOTHDF5/lib -lhdf5_cpp -lhdf5
Note that HDF5 also provides compiler wrapper utilities that compile and link in a single step:
h5c++ -o writedata writedata.cpp
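The same `EBROOT` pattern carries over to build systems. For instance, a minimal Makefile sketch for the two-step build above (assuming `HDF5/1.12.1-gompi-2020b` has been loaded, so `EBROOTHDF5` is exported; the target and source names are illustrative) could look like:

```makefile
# Sketch: relies on 'module load HDF5/...' having exported EBROOTHDF5.
CXX      = mpicxx
CXXFLAGS = -I$(EBROOTHDF5)/include
LDFLAGS  = -L$(EBROOTHDF5)/lib
LDLIBS   = -lhdf5_cpp -lhdf5

writedata: writedata.o
	$(CXX) -o $@ $^ $(LDFLAGS) $(LDLIBS)

writedata.o: writedata.cpp
	$(CXX) $(CXXFLAGS) -c $<
```

Because the paths are taken from the environment, the same Makefile keeps working across software stack releases as long as the matching module is loaded first.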
Example using cuDNN
EasyBuild module description
The NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks.
Available cuDNN versions
Check the available versions on MeluXina with the module command:
module avail cuDNN
Terminal output example
-------------------------- /apps/USE/easybuild/release/latest/modules/all --------------------------
cuDNN/8.0.4.30-CUDA-11.1.1 cuDNN/8.2.1.32-CUDA-11.3.1
cuDNN/8.1.1.33-CUDA-11.2.2 cuDNN/8.2.2.26-CUDA-11.4.1 (L,D)
Where:
L: Module is loaded
D: Default Module
For example, in the MeluXina 2021.2 software stack you will find:
cuDNN/8.0.4.30-CUDA-11.1.1
cuDNN/8.1.1.33-CUDA-11.2.2
cuDNN/8.2.1.32-CUDA-11.3.1
cuDNN/8.2.2.26-CUDA-11.4.1
Compilation of code using cuDNN library
Load one of the available versions in order to compile your program:
module load cuDNN/8.2.2.26-CUDA-11.4.1
The `EBROOTCUDNN` environment variable is then populated in your shell's environment and points to the cuDNN installation directory:
echo $EBROOTCUDNN
Terminal output example
/apps/USE/easybuild/release/2021.2/software/cuDNN/8.2.2.26-CUDA-11.4.1
You can now compile your CUDA code (e.g. `pooling.cu`) step by step, using the CUDA compiler:
nvcc -c -I$EBROOTCUDNN/include pooling.cu
and then link against the cuDNN library (use `-lcudnn_static` instead of `-lcudnn` if you prefer static linking):
nvcc -o pooling pooling.o -L$EBROOTCUDNN/lib -lcudnn