User Interface
Users can log in to GRAVITON through the User Interface. To do so, we simply connect via SSH from the CLI:
user@local:~$ ssh username@grui01.ific.uv.es
We are automatically placed in the GRAVITON User Interface, which greets us with the following banner:
=========================================================
Welcome to
____ ____ ___ _____ _____ ___ _ _
/ ___| _ \ / \ \ / /_ _|_ _/ _ \| \ | |
| | _| |_) | / _ \ \ / / | | | || | | | \| |
| |_| | _ < / ___ \ V / | | | || |_| | |\ |
\____|_| \_\/_/ \_\_/ |___| |_| \___/|_| \_|
The SOM's parallel computing infrastructure
==========================================================
**Information**
User Interface : grui01.ific.uv.es
OS : AlmaLinux 9.5 (Teal Serval)
MPI Version : mpirun (Open MPI) 4.1.7rc1
MPI Path : /usr/mpi/gcc/openmpi-4.1.7rc1/bin
Job Scheduler : Slurm 22.05.9
Documentation : https://som.ific.uv.es/
==========================================================
**Useful Commands**
grstatus : summary of CPU and Memory utilization
grquota : summary of user disk quotas
squeue -u $USER : Check your running and pending jobs
sinfo : View partition/node status
sbatch <script> : Submit a job to SLURM
scancel <jobid> : Cancel one of your jobs
==========================================================
In this environment we can develop code and run tests directly from the CLI, keeping in mind that only the 112 virtual cores of the User Interface node itself are used. To harness the power of the Worker Nodes, we must go through GRAVITON's job scheduler, SLURM.
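A quick way to confirm the core count of the UI node is the standard nproc utility (available on AlmaLinux as part of coreutils); it should report the 112 virtual cores mentioned above:
username@grui01:~$ nproc
112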
MPI on GRAVITON
Currently, version 4.1.7rc1 of OpenMPI is installed on the GRAVITON cluster. This is a precompiled version provided by Mellanox, specifically built and optimized to support the InfiniBand interconnect used in the system.
Important
Do not use your own local or system-wide MPI installation (self-compiled). It may cause communication failures or segmentation faults, especially in multi-node jobs, due to incompatibilities with the network drivers and runtime environment.
Installation Path
The OpenMPI installation provided by Mellanox can be found at:
/usr/mpi/gcc/openmpi-4.1.7rc1/bin
This directory contains all standard MPI compiler wrappers and execution tools, including:
mpicc / mpic++ / mpif90: for compiling MPI applications in C/C++/Fortran.
mpirun: for launching MPI-enabled executables across multiple nodes.
To ensure the correct version of OpenMPI is used, it is recommended to add this path to your environment:
export PATH=/usr/mpi/gcc/openmpi-4.1.7rc1/bin:$PATH
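To make this setting persistent across sessions, a common convention (not a GRAVITON-specific requirement) is to append the export line to ~/.bashrc. We can then check that the Mellanox-provided binaries are the ones being picked up; the version string should match the one shown in the login banner:
username@grui01:~$ echo 'export PATH=/usr/mpi/gcc/openmpi-4.1.7rc1/bin:$PATH' >> ~/.bashrc
username@grui01:~$ source ~/.bashrc
username@grui01:~$ which mpirun
/usr/mpi/gcc/openmpi-4.1.7rc1/bin/mpirun
username@grui01:~$ mpirun --version
mpirun (Open MPI) 4.1.7rc1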
Compiling and Running a Simple MPI Program
Let’s assume we want to compile and run a basic C++ MPI program called hello_world_mpi.cpp that uses the MPI header mpi.h.
The source code should follow the typical MPI program structure, including:
Initialization of the MPI environment
Obtaining the number of processes and the rank of each
Finalizing the MPI environment
#include <iostream>
#include <mpi.h>

int main(int argc, char** argv) {
    // Initialize the MPI environment
    MPI_Init(&argc, &argv);

    // Rank (ID) of this process within MPI_COMM_WORLD
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // Total number of processes in the communicator
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    if (world_rank == 0) {
        std::cout << "Hello World from the main process (rank 0) of " << world_size << " processes." << std::endl;
    } else {
        std::cout << "Hello World from process " << world_rank << " of " << world_size << "." << std::endl;
    }

    // Finalize the MPI environment before exiting
    MPI_Finalize();
    return 0;
}
This simple program prints the classic "Hello World" message from every MPI process. To work with it, we must use Open MPI. First, we compile the code with the Open MPI C++ compiler wrapper, mpicxx. Here is an example:
username@grui01:~$ mpicxx -o hello_world_mpi hello_world_mpi.cpp
Once compiled, we can run it with as many processes as we like. To execute the program directly from the CLI, we use the mpirun command:
username@grui01:~$ mpirun -n 4 ./hello_world_mpi
In this example, we launch the program with 4 processes (the order of the output lines may vary between runs). The output will be:
Hello World from the main process (rank 0) of 4 processes.
Hello World from process 2 of 4.
Hello World from process 1 of 4.
Hello World from process 3 of 4.
As already mentioned, GRAVITON allows testing on the UI node, which provides 112 virtual cores. To make use of the Worker Nodes, jobs must be submitted through the SLURM queue manager.
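As a minimal sketch of such a submission (the resource values below are illustrative placeholders, not taken from the GRAVITON documentation, and must be adapted to the cluster's actual partitions and limits), a batch script for the hello_world_mpi example could look like this:
#!/bin/bash
#SBATCH --job-name=hello_mpi         # name shown by squeue
#SBATCH --output=hello_mpi_%j.out    # output file (%j expands to the job ID)
#SBATCH --ntasks=8                   # total number of MPI processes (placeholder)
#SBATCH --time=00:05:00              # wall-clock time limit (placeholder)

# Use the Mellanox-provided Open MPI described above
export PATH=/usr/mpi/gcc/openmpi-4.1.7rc1/bin:$PATH

# Launch one MPI rank per allocated task
mpirun -n $SLURM_NTASKS ./hello_world_mpi

The script is submitted with sbatch and monitored with squeue -u $USER, both listed among the useful commands in the login banner:
username@grui01:~$ sbatch hello_world_mpi.sh
username@grui01:~$ squeue -u $USER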