User Interface
Users can log in to GLUON through the User Interface. To do this, simply run the following from a terminal:
user@local:~$ ssh gluon_user@glui01.ific.uv.es
We automatically enter the GLUON User Interface, which greets us with its login banner:
Welcome to
  _______   __       __    __    ______   .__   __.
 /  _____| |  |      |  |  |  |  /  __  \  |  \ |  |
|  |  __   |  |      |  |  |  | |  |  |  | |   \|  |
|  | |_ |  |  |      |  |  |  | |  |  |  | |  . `  |
|  |__| |  |  `----. |  `--'  | |  `--'  | |  |\   |
 \______|  |_______|  \______/   \______/  |__| \__|
The IFIC parallel computing infrastructure
==========================================================
**Information**
OS: CentOS Linux 7 (Core)
MPI Version: mpirun (Open MPI) 4.1.5a1
Job Scheduler: HTCondor V.10.9.0
GLUON Manual: https://gluon.ific.uv.es
==========================================================
**Useful Commands**
CPUstatus: summary on CPU utilization
==========================================================
In this environment we can develop code and run tests directly from the CLI, keeping in mind that such tests use only the 96 cores / 192 threads available on the User Interface node. If we want to utilize the power of the Worker Nodes, we must use the GLUON Job Scheduler.
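As a quick sanity check, the CPUs visible on the UI node can be inspected directly; nproc and lscpu are standard Linux tools (not GLUON-specific commands), and nproc should report the 192 threads mentioned above:
[gluon_user@glui01 ~]$ nproc
[gluon_user@glui01 ~]$ lscpu | grep -E '^CPU\(s\)|Socket|Core|Thread'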
MPI Test example
Currently, version 4.1.5a1 of Open MPI is installed on GLUON. Let's imagine we want to test the C++ code hello_world_mpi.cpp, which uses the MPI header mpi.h. The C++ program will have the following structure:
#include <iostream>
#include <mpi.h>

int main(int argc, char** argv) {
    // Initialize the MPI execution environment
    MPI_Init(&argc, &argv);

    // Rank (ID) of this process within MPI_COMM_WORLD
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // Total number of processes in MPI_COMM_WORLD
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    if (world_rank == 0) {
        std::cout << "Hello World from the main process (rank 0) of " << world_size << " processes." << std::endl;
    } else {
        std::cout << "Hello World from process " << world_rank << " of " << world_size << "." << std::endl;
    }

    // Shut down the MPI environment
    MPI_Finalize();
    return 0;
}
This is a simple code that launches the classic "hello world" from several processes (one per core). To work with it, we must use Open MPI. First, we compile the code using the Open MPI C++ compiler wrapper, mpicxx. Here is an example:
[gluon_user@glui01 ~]$ /usr/mpi/gcc/openmpi-4.1.5a1/bin/mpicxx -o hello_world_mpi hello_world_mpi.cpp
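The full path is used because the Open MPI binaries are not assumed to be on the default PATH. As an optional convenience (assuming a bash shell), the Open MPI bin directory can be prepended to the PATH so that mpicxx and mpirun can be invoked directly:
[gluon_user@glui01 ~]$ export PATH=/usr/mpi/gcc/openmpi-4.1.5a1/bin:$PATH
[gluon_user@glui01 ~]$ mpicxx -o hello_world_mpi hello_world_mpi.cpp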
Once compiled, we can execute it on as many processes as we want. To run the code directly in the CLI, we use the command mpirun:
[gluon_user@glui01 ~]$ /usr/mpi/gcc/openmpi-4.1.5a1/bin/mpirun -np 4 -mca coll_hcoll_enable 0 ./hello_world_mpi
In this example we launch the code with 4 processes. The output we obtain will be similar to the following (the ordering of the lines can vary between runs, since each process writes to stdout independently):
Hello World from the main process (rank 0) of 4 processes.
Hello World from process 2 of 4.
Hello World from process 1 of 4.
Hello World from process 3 of 4.
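If we also want to see on which host each rank runs, the example can be extended with the standard MPI call MPI_Get_processor_name. This extension is only a sketch, not part of the original example, and it is compiled and launched exactly like hello_world_mpi.cpp:
#include <iostream>
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // Name of the node this rank is running on
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    std::cout << "Rank " << world_rank << " of " << world_size
              << " running on " << processor_name << std::endl;

    MPI_Finalize();
    return 0;
}
When run on the User Interface, every rank should report the UI host name (glui01 in the prompt above); once jobs are sent to the Worker Nodes, the reported host names will differ.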
As already mentioned, GLUON allows for testing on the UI node, which has 96 cores / 192 threads. To utilize the Worker Nodes, it is necessary to use the HTCondor queue manager.
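As a rough illustration only (the file names, the resource request, and the use of the vanilla universe are assumptions of this sketch; the GLUON manual at https://gluon.ific.uv.es describes the supported way to submit jobs), a single-node HTCondor submission of the example above could look like this. First, a small wrapper script that launches mpirun on the Worker Node:
#!/bin/bash
# run_hello.sh -- wrapper executed by HTCondor on the Worker Node
/usr/mpi/gcc/openmpi-4.1.5a1/bin/mpirun -np 4 -mca coll_hcoll_enable 0 ./hello_world_mpi
and a minimal submit description file:
# hello_world_mpi.sub -- minimal single-node submit description (sketch)
universe                = vanilla
executable              = run_hello.sh
transfer_input_files    = hello_world_mpi
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
output                  = hello_world_mpi.out
error                   = hello_world_mpi.err
log                     = hello_world_mpi.log
request_cpus            = 4
queue
The job would then be submitted with condor_submit hello_world_mpi.sub and monitored with condor_q. Multi-node MPI jobs typically require HTCondor's parallel universe; consult the GLUON manual for the site-specific configuration.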