
Compiling

Compiling C, C++, and Fortran code on MeluXina is similar to compiling on a regular personal computer, with some differences.

One compiler suite on MeluXina is the AMD Optimizing C/C++ Compiler (AOCC), providing clang, clang++, and flang frontends. Other compilers are available such as the Intel suite, providing icc, icpc, and ifort. The GNU Compiler Collection is also available on MeluXina, providing gcc, g++, and gfortran. GNU compiler compatibility is ubiquitous across free and open-source software projects, which includes much scientific software.
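Each suite is made available through the module system. As a sketch (the module names below are assumptions; verify the exact names and versions with `module avail`):

```shell
# Hypothetical module names -- check `module avail` on MeluXina for the real ones
module load AOCC    # AMD AOCC: clang, clang++, flang
module load intel   # Intel suite: icc, icpc, ifort
module load GCC     # GNU Compiler Collection: gcc, g++, gfortran
```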

Generic compilation steps

The same generic workflow applies whenever you compile source code or software on your own. After reserving a node on MeluXina, go to the directory containing your sources. We recommend temporarily placing your sources and compiling under the $SCRATCH directory, where you will get better I/O performance.
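The workflow above can be sketched as follows; the account, partition, and time values are placeholders to adapt to your own project:

```shell
# Reserve an interactive node (placeholder account/partition/time values)
salloc -A <your_account> -p cpu -t 01:00:00 -N 1
# Work from the scratch filesystem for better I/O performance
cd $SCRATCH
cp -r $HOME/my-sources .
cd my-sources
```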

Using command-line

The steps to compile source code, from serial programs to parallel OpenMP and MPI programs, are detailed below for C, C++, and Fortran.

Serial

Source code

C
#include <stdio.h>

int main(void) 
{
  printf("Hello world!\n");
  return 0;
}
Compiling and Executing

On a reserved node

module load foss/2021a
gcc -o helloworld helloworld.c 
./helloworld
Output

Output from the execution

Hello world!

C++
#include <iostream>

int main(void) 
{
  std::cout << "Hello world!" << std::endl;
  return 0;
}
Compiling and Executing

On a reserved node

module load foss/2021a
g++ -o helloworld helloworld.cpp 
./helloworld
Output

Output from the execution

Hello world!
Fortran
program hello
      print *, "Hello World!"
end program
Compiling and Executing

On a reserved node

module load foss/2021a
gfortran -o helloworld helloworld.f90
./helloworld
Output

Output from the execution

Hello World!

OpenMP

There are a variety of technologies available for writing parallel programs, and two or more of them can be combined. While the clusters are capable of running programs developed with any of these technologies, the following examples focus on compiling OpenMP and Message Passing Interface (MPI) code.

Source code

C
#include <stdio.h> 
#include <stdlib.h> 
#include <omp.h>  

int main(void) 
{ 
    // Beginning of parallel region 
    #pragma omp parallel 
    { 
        printf("Hello World... from thread = %d\n", 
               omp_get_thread_num()); 
    } 
    // Ending of parallel region 
    return 0;
} 
Compiling and Executing

On a reserved node

module load foss/2021a
gcc -o helloworld_omp helloworld_OMP.c -fopenmp
export OMP_NUM_THREADS=8
./helloworld_omp
Output

Output from the execution

Hello World... from thread = 0
Hello World... from thread = 6
Hello World... from thread = 7
Hello World... from thread = 3
Hello World... from thread = 5
Hello World... from thread = 2
Hello World... from thread = 1
Hello World... from thread = 4
C++
#include <iostream>
#include <omp.h> 

int main(void) 
{
    // Beginning of parallel region 
    #pragma omp parallel 
    { 
       std::cout << "Hello World... from thread = " << omp_get_thread_num() << std::endl;
    } 
    // Ending of parallel region 
    return 0;
}
Compiling and Executing

On a reserved node

module load foss/2021a
g++ -o helloworld_omp helloworld_OMP.cpp -fopenmp
export OMP_NUM_THREADS=8
./helloworld_omp
Output

Output from the execution

Hello World... from thread = 0
Hello World... from thread = 2
Hello World... from thread = 6
Hello World... from thread = 4
Hello World... from thread = 7
Hello World... from thread = 3
Hello World... from thread = 5
Hello World... from thread = 1
Fortran
PROGRAM Parallel_Hello_World
USE OMP_LIB

!$OMP PARALLEL

    PRINT *, "Hello World... from thread = ", OMP_GET_THREAD_NUM()

!$OMP END PARALLEL

END
Compiling and Executing

On a reserved node

module load foss/2021a
gfortran -o helloworld_omp helloworld_OMP.f90 -fopenmp
export OMP_NUM_THREADS=8
./helloworld_omp
Output

Output from the execution

Hello World... from thread = 3
Hello World... from thread = 0
Hello World... from thread = 2
Hello World... from thread = 6
Hello World... from thread = 1
Hello World... from thread = 4
Hello World... from thread = 7
Hello World... from thread = 5

Message Passing Interface (MPI)

MPI is the technology you should use when you wish to run your program in parallel on multiple cluster compute nodes simultaneously. Compiling an MPI program is relatively easy. However, writing an MPI-based parallel program takes more work.

Source code

C
#include <stdio.h>  
#include <stdlib.h> 
#include <mpi.h>    

int main(int argc, char** argv)
{
    int rank, size, length;
    char name[BUFSIZ];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &length);

    printf("%s: hello world from process %d of %d\n", name, rank, size);

    MPI_Finalize();

    return 0;
}
Compiling and Executing

On a reserved node

module load foss/2021a
mpicc -o helloworld_mpi helloworld_MPI.c
mpirun -np 4 ./helloworld_mpi
Output

Output from the execution

hello world from process 1 of 4
hello world from process 2 of 4
hello world from process 0 of 4
hello world from process 3 of 4
C++
#include <iostream>
#include <mpi.h>

int main(int argc, char **argv) 
{
  MPI_Init(&argc, &argv);

  int world_size;
  MPI_Comm_size(MPI_COMM_WORLD, &world_size);
  int my_rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

  std::cout << "hello world from process " << my_rank << " of " << world_size << std::endl;

  MPI_Finalize();

  return 0;
}
Compiling and Executing

On a reserved node

module load foss/2021a
mpic++ -o helloworld_mpi helloworld_MPI.cpp
mpirun -np 4 ./helloworld_mpi
Output

Output from the execution

hello world from process 2 of 4
hello world from process 3 of 4
hello world from process 1 of 4
hello world from process 0 of 4
Fortran
program hello
use mpi
integer rank, size, ierror

call MPI_INIT(ierror)
call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierror)
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
print*, 'node', rank, ': Hello world'
call MPI_FINALIZE(ierror)
end program
Compiling and Executing

On a reserved node

module load foss/2021a
mpifort -o helloworld_mpi helloworld_MPI.f90
mpirun -np 4 ./helloworld_mpi
Output

Output from the execution

node 2 : Hello world
node 3 : Hello world
node 1 : Hello world
node 0 : Hello world
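Since MeluXina uses the Slurm scheduler, srun can also serve as the MPI launcher in place of mpirun; a minimal sketch inside a job allocation:

```shell
module load foss/2021a
mpicc -o helloworld_mpi helloworld_MPI.c
# srun derives the rank count from the Slurm allocation; -n sets it explicitly
srun -n 4 ./helloworld_mpi
```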

Compiling for MeluXina FPGAs

The MeluXina Accelerator Module includes a partition with FPGA accelerators.
Compiling code for these accelerators requires specific steps and configurations outlined below. We are basing the examples on OpenCL and OneAPI code samples offered by BittWare and Intel.

In the following example we will build and run an OpenCL 2D FFT example for the FPGA target.

  • Prepare code sample

    Copy the OpenCL FFT2D example from BittWare to your own folder:

    mkdir ~/fpga-tests/
    module load fpga-samples/1.0.0
    cd ~/fpga-tests
    cp -r $FPGA_SAMPLE_PATH/OpenCL .
    
  • Load the necessary software environment and modules for building FPGA code

    module load ifpgasdk
    module load 520nmx
    module list
    
  • Compile the OpenCL example

    To build the design you need to run the build script in the device folder, which will contain a compilation command such as aoc -board=p520_hpc_m210h_g3x16 -fp-relaxed -DINTEL_CL -o fft2d_mx FFT_2d.cl. Note that generating the aocx binary will take several hours (!).

    cd OpenCL/ReferenceDesigns/fft/device
    ./build.sh
    

    Now, to compile the host code run make in the root directory:

    cd ..
    make
    
  • Run the 2D FFT example

    The previous stage generated the host application binary, which can now be run:

    ./fft2d_opencl.exe
    

Note

Compiling OneAPI code requires additional software packages currently being tested. If you are interested in this possibility please get in touch via our service desk.