Compiling
Compiling C, C++, and Fortran code on MeluXina is similar to how it is done on regular personal systems, with some differences.
One compiler suite on MeluXina is the AMD Optimizing C/C++ Compiler (AOCC), providing the clang, clang++, and flang frontends.
Other compilers are available, such as the Intel suite, providing icc, icpc, and ifort.
The GNU Compiler Collection is also available on MeluXina, providing gcc, g++, and gfortran. GNU compiler compatibility is ubiquitous across free and open-source software projects, which includes much scientific software.
Generic compilation steps
A generic workflow applies when compiling any source code or software on your own. After reserving a node on MeluXina, go to the directory containing your source code. We recommend temporarily placing your sources and compiling under the $SCRATCH directory, where you will get better performance.

Using the command line
From serial code to parallel OpenMP and MPI code, the steps to compile a source file are detailed below for the C, C++, and Fortran languages.
Serial
C
Source code
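A minimal helloworld.c consistent with the gcc command and the output shown below might look like the following sketch (the exact contents are an assumption, not the original listing):

#include <stdio.h>

int main(void)
{
    /* Print a greeting to standard output */
    printf("Hello world!\n");
    return 0;
}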
Compiling and Executing
On a reserved node
module load foss/2021a
gcc -o helloworld helloworld.c
./helloworld
Output
Output from the execution
Hello world!
C++
Compiling and Executing
On a reserved node
module load foss/2021a
g++ -o helloworld helloworld.cpp
./helloworld
Output
Output from the execution
Hello world!
Fortran
Compiling and Executing
On a reserved node
module load foss/2021a
gfortran -o helloworld helloworld.f
./helloworld
Output
Output from the execution
Hello world!
OpenMP
A variety of technologies can be used to write a parallel program, and two or more of them can be combined. While the cluster can run programs developed with any of these technologies, the following examples focus on compiling OpenMP and Message Passing Interface (MPI) code.
C
Source code
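A minimal OpenMP helloworld_OMP.c consistent with the commands and output shown below might look like the following sketch (the exact contents are an assumption). Each thread in the parallel region prints its own thread id, so the order of the lines varies between runs:

#include <omp.h>
#include <stdio.h>

int main(void)
{
    /* Spawn a team of threads; each prints its own id */
    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        printf("Hello World... from thread = %d\n", tid);
    }
    return 0;
}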
Compiling and Executing
On a reserved node
module load foss/2021a
gcc -o helloworld_omp helloworld_OMP.c -fopenmp
export OMP_NUM_THREADS=8
./helloworld_omp
Output
Output from the execution
Hello World... from thread = 0
Hello World... from thread = 6
Hello World... from thread = 7
Hello World... from thread = 3
Hello World... from thread = 5
Hello World... from thread = 2
Hello World... from thread = 1
Hello World... from thread = 4
C++
Compiling and Executing
On a reserved node
module load foss/2021a
g++ -o helloworld_omp helloworld_OMP.cpp -fopenmp
export OMP_NUM_THREADS=8
./helloworld_omp
Output
Output from the execution
Hello World... from thread = 0
Hello World... from thread = 2
Hello World... from thread = 6
Hello World... from thread = 4
Hello World... from thread = 7
Hello World... from thread = 3
Hello World... from thread = 5
Hello World... from thread = 1
Fortran
Compiling and Executing
On a reserved node
module load foss/2021a
gfortran -o helloworld_omp helloworld_OMP.f -fopenmp
export OMP_NUM_THREADS=8
./helloworld_omp
Output
Output from the execution
Hello World... from thread = 3
Hello World... from thread = 0
Hello World... from thread = 2
Hello World... from thread = 6
Hello World... from thread = 1
Hello World... from thread = 4
Hello World... from thread = 7
Hello World... from thread = 5
Message Passing Interface (MPI)
MPI is the technology to use when you want to run your program in parallel across multiple compute nodes simultaneously. Compiling an MPI program is relatively easy; writing an MPI-based parallel program, however, takes more work.
C
Source code
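A minimal MPI helloworld_MPI.c consistent with the commands and output shown below might look like the following sketch (the exact contents are an assumption). Every rank prints its own id and the communicator size, and the ordering of the output depends on scheduling:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    /* Initialize MPI and query this rank's id and the total number of ranks */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("hello world from process %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}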
Compiling and Executing
On a reserved node
module load foss/2021a
mpicc -o helloworld_mpi helloworld_MPI.c
mpirun -np 4 ./helloworld_mpi
Output
Output from the execution
hello world from process 1 of 4
hello world from process 2 of 4
hello world from process 0 of 4
hello world from process 3 of 4
C++
Compiling and Executing
On a reserved node
module load foss/2021a
mpic++ -o helloworld_mpi helloworld_MPI.cpp
mpirun -np 4 ./helloworld_mpi
Output
Output from the execution
hello world from process 2 of 4
hello world from process 3 of 4
hello world from process 1 of 4
hello world from process 0 of 4
Fortran
Compiling and Executing
On a reserved node
module load foss/2021a
mpifort -o helloworld_mpi helloworld_MPI.f90
mpirun -np 4 ./helloworld_mpi
Output
Output from the execution
hello world from process 2 of 4
hello world from process 3 of 4
hello world from process 1 of 4
hello world from process 0 of 4
Compiling for MeluXina FPGAs
The MeluXina Accelerator Module includes a partition with FPGA accelerators.
Compiling code for these accelerators requires the specific steps and configurations outlined below. The examples are based on OpenCL and oneAPI code samples provided by BittWare and Intel.
In the following example, we build and run an OpenCL 2D FFT design for the FPGA target.
- Prepare code sample
Copy the OpenCL FFT2D example from BittWare to your own folder:
mkdir ~/fpga-tests/
module load fpga-samples/1.0.0
cd ~/fpga-tests
cp -r $FPGA_SAMPLE_PATH/OpenCL .
- Load the necessary software environment and modules for building FPGA code
module load ifpgasdk
module load 520nmx
module list
- Compile the OpenCL example
To build the design, run the build script in the device folder; it contains a compilation command such as aoc -board=p520_hpc_m210h_g3x16 -fp-relaxed -DINTEL_CL -o fft2d_mx FFT_2d.cl. Note that generating the aocx binary will take several hours.
cd OpenCL/ReferenceDesigns/fft/device
./build.sh
Now, to compile the host code, run make in the root directory:
cd ..
make
- Run the 2D FFT example
The host application binary generated in the previous step can now be run:
./ff2d_opencl.exe
Note
Compiling oneAPI code requires additional software packages that are currently being tested. If you are interested in this possibility, please get in touch via our service desk.