
Intel Compilers on MeluXina

The MeluXina system environment provides the Intel toolchain, which consists almost entirely of software components developed by Intel.

EasyBuild module description

Compiler toolchain including the Intel compilers, Intel MPI and the Intel Math Kernel Library (MKL). It allows compiling C, C++ and Fortran code with both the classic and oneAPI compilers.

ICPX (Intel DPC++) Usage

Compiling SYCL programs for GPU and FPGA accelerators

Interactive

See handling interactive jobs for further details. Reserve an interactive session:

On GPU:

salloc -A COMPUTE_ACCOUNT -t 01:00:00 -q dev --res gpudev -p gpu -N 1

The example above allocates one GPU node in interactive mode for 1 hour (dev QoS with the gpudev reservation). Load the Intel module as in the script below; the default version is used if none is specified.

module load env/staging/2024.1
# SYCL module
module load intel-oneapi
icpx -fsycl -fsycl-targets=nvptx64-nvidia-cuda hello_gpu.cpp -o  hello_gpu
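
The command above assumes a SYCL source file named hello_gpu.cpp. As a minimal sketch (the file contents below are illustrative only and not part of the MeluXina environment), such a program could look like this:

// hello_gpu.cpp (illustrative): fill a buffer on the device, read it back on the host
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    constexpr size_t N = 16;
    std::vector<int> data(N, 0);
    {
        sycl::buffer<int> buf(data.data(), sycl::range<1>(N));
        q.submit([&](sycl::handler &h) {
            sycl::accessor acc(buf, h, sycl::write_only);
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                acc[i] = static_cast<int>(i[0]);
            });
        });
    } // leaving this scope copies the results back into data

    std::cout << "data[" << N - 1 << "] = " << data[N - 1] << "\n";
    return 0;
}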

On FPGA:

See the FPGA compiling and oneAPI quantum pages for more details about compiling and building FPGA images.

salloc -A COMPUTE_ACCOUNT -t 01:00:00 -q dev --res fpgadev -p fpga -N 1

The example above allocates one FPGA node in interactive mode (dev QoS with the fpgadev reservation). Load the Intel module as in the script below; the default version is used if none is specified.

module load env/staging/2024.1
module load intel-oneapi
# FPGA extensions
module load intel-fpga
# Bittware specific module
module load 520nmx/20.4
icpx -fsycl -fintelfpga -qactypes -Xshardware -Xsboard=p520_hpc_m210h_g3x16 -DFPGA_HARDWARE hello_fpga.cpp -o hello_fpga
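
A corresponding hello_fpga.cpp could be sketched as below (illustrative only; the FPGA device selector comes from the Intel FPGA extensions header shipped with recent oneAPI releases, so check the header and selector names against the oneAPI version provided by the module):

// hello_fpga.cpp (illustrative): a single_task kernel, the usual starting point for FPGA designs
#include <sycl/sycl.hpp>
#include <sycl/ext/intel/fpga_extensions.hpp>
#include <iostream>

int main() {
    // fpga_selector_v targets the FPGA hardware image built with -Xshardware;
    // fpga_emulator_selector_v can be used for quick functional checks.
    sycl::queue q{sycl::ext::intel::fpga_selector_v};

    int result = 0;
    {
        sycl::buffer<int> buf(&result, sycl::range<1>(1));
        q.submit([&](sycl::handler &h) {
            sycl::accessor acc(buf, h, sycl::write_only);
            h.single_task([=]() { acc[0] = 42; });
        });
    } // result is copied back when the buffer goes out of scope

    std::cout << "Kernel wrote: " << result << "\n";
    return 0;
}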

Batch

ICPX can also be used in a batch job managed by Slurm. The script below compiles a simple SYCL HelloWorld example on one GPU node allocated for 10 hours.

#!/bin/bash -l
#SBATCH --time=10:0:00
#SBATCH --account=projectAccount
#SBATCH --partition=gpu
#SBATCH --qos=default
#SBATCH --nodes=1

module load env/staging/2024.1

# GPU
module load intel-oneapi
icpx -fsycl -fsycl-targets=nvptx64-nvidia-cuda hello_gpu.cpp -o  hello

# FPGA
# module load intel-oneapi
# module load intel-fpga                                                                                                        
# module load 520nmx/20.4
# icpx -fsycl -fintelfpga -qactypes -Xshardware -Xsboard=p520_hpc_m210h_g3x16 -DFPGA_HARDWARE hello_fpga.cpp -o hello

srun ./hello
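
Submit the script with sbatch; the compiler messages and the program output produced by srun are written to the job's Slurm output file (slurm-<jobid>.out by default).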

Makefiles

It is recommended to use a Makefile to compile your program. Your Makefile should look like the example below:

#Defines compiler and target (GPU)
CC=icpx
TARGET=nvptx64-nvidia-cuda

# These are the options we pass to the compiler.
# -std=c++17 means we want to use the C++17 standard.
# -g includes debugging symbols so the program can be run under a debugger.
# -O0 specifies to do no optimizations on our code.
CFLAGS = -std=c++17 -g -O0 -fsycl -fsycl-targets=$(TARGET)

all: hello_c

hello_c: hello.o
	$(CC) $(CFLAGS) -o hello_c hello.o

hello.o: hello.cpp
	$(CC) $(CFLAGS) -c hello.cpp
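
With this Makefile placed next to hello.cpp, running make after loading the modules builds hello.o and links it into the hello_c binary executed in the batch example below.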

It is also possible to compile via a Makefile inside a batch job:

#!/bin/bash -l
#SBATCH -N 1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=64
#SBATCH --gpus-per-task=1
#SBATCH -p gpu
#SBATCH -q test
#SBATCH --time 10:00:00

#Load Intel modules
module load env/staging/2024.1

#Load GPU Module
module load intel-oneapi

#Load FPGA Module
#module load intel-oneapi
#module load intel-fpga
#module load 520nmx/20.4

#Check Intel version
icpx --version

#Compile the program with the Intel compiler (parallel make)
make -j

#Execute the program
./hello_c

Loading ncurses in order to build with the CMake toolchain

module load env/staging/2023.1
module load ncurses/5.9
#module load env/staging/2024.1
#module load ncurses/6.5