Resource Architecture and QoS Policies
This section describes the resource allocation architecture of GRAVITON, including how user groups, partitions, and QOS (Quality of Service) levels interact to determine job eligibility and priority.
User Accounts and Groups
Users in GRAVITON belong to one of the following accounting groups, which define their resource access and scheduling priority:
| Account | Description | Priority |
|---|---|---|
| | Subgroup SOM | 10 |
| | Subgroup COM | 10 |
| | External users | 2 |
Each user is associated with a single group. This association is used only for monitoring and statistics, since fairshare is not applied per group, only at the user level.
Partitions and QoS Mapping
Resource usage in GRAVITON is controlled by a combination of partitions and QOS policies. Users do not select partitions directly; instead, the system automatically assigns jobs based on:
- Requested number of tasks (`--ntasks`)
- Requested number of CPUs per task (`--cpus-per-task`)
- Selected QoS (`--qos`)
- Internal constraints defined in the SLURM configuration
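As an illustration, a minimal submission script under these rules might look like the sketch below; the job name, QoS choice, walltime, and the `./my_program` executable are placeholder assumptions, not site defaults:

```bash
#!/bin/bash
#SBATCH --job-name=example_job     # illustrative job name
#SBATCH --ntasks=1                 # requested number of tasks
#SBATCH --cpus-per-task=16         # CPUs per task; together with --ntasks this drives routing
#SBATCH --qos=hep                  # selected QoS; determines limits and target partition
#SBATCH --time=24:00:00            # must stay within the QoS walltime limit

# Note: --nodes, --mem and --partition are deliberately omitted;
# the scheduler derives them from the QoS and the CPU request.
srun ./my_program                  # placeholder executable
```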
| QoS | Intended Use | CPU Range | Max Nodes per Job | Max Time | Partition |
|---|---|---|---|---|---|
| `std` | Lightweight jobs (default QoS) | 1–8 | 1 | 12h | serial |
| `hep` | Medium-sized serial/parallel jobs | 1–56 | 1 | 48h | serial |
| `cosmo` | At this time, equivalent to `hep` | 1–56 | 1 | 48h | serial |
| `lattice` | Large-scale MPI jobs (InfiniBand) | ≥ 57 | ∞ | 48h | parallel |
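As a sketch of how these limits translate into a submission, a large MPI job could be requested as follows; the task count and the `./my_mpi_app` executable are illustrative assumptions:

```bash
#!/bin/bash
#SBATCH --job-name=mpi_large       # illustrative job name
#SBATCH --ntasks=112               # >= 57 CPUs, so the job lands in the multi-node pool
#SBATCH --cpus-per-task=1
#SBATCH --qos=lattice              # large-scale MPI QoS (InfiniBand nodes)
#SBATCH --time=48:00:00            # maximum walltime allowed by this QoS

srun ./my_mpi_app                  # placeholder MPI executable
```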
|
Partition Architecture
GRAVITON is composed of two main resource pools:
- `serial`: Intended for jobs requiring a single node. Cores are allocated from:
  - grwn[01-21]: 56-core nodes with 200 Gb/s InfiniBand
  - somcosmo[01-02]: 96-core nodes with 25 GbE Ethernet
- `parallel`: Designed for distributed jobs needing multiple nodes with a high-speed interconnect. Only nodes with InfiniBand: grwn[01-21]
Each partition is isolated to optimize job scheduling and performance depending on the type of workload.
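Both pools can be inspected from the login node with standard SLURM queries, assuming the partitions are named exactly `serial` and `parallel` as described above; the exact columns and node states shown depend on the cluster configuration:

```bash
# Partition, node count, CPUs per node, and node list for each pool
sinfo -p serial,parallel -o "%P %D %c %N"

# Full definition of a single partition
scontrol show partition serial
```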
Policy Enforcement
To ensure consistency and fairness in job scheduling:
- Users must not specify `--nodes`, `--mem`, or `--partition`.
- The QoS defines implicit limits on CPUs and node count.
- Memory constraints are enforced dynamically using SLURM's Lua plugin.
- The system rejects jobs that violate QoS restrictions.
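The limits attached to each QoS can be checked with a standard `sacctmgr` query; which limit fields are actually populated is an assumption about the site's accounting configuration:

```bash
# List each QoS with its priority, walltime limit, and per-job resource limits
sacctmgr show qos format=Name,Priority,MaxWall,MaxTRESPerJob%30
```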
Memory Policy
GRAVITON manages memory allocation automatically based on the selected QoS and the resulting partition. Users are not allowed to manually specify memory using directives like `--mem` or `--mem-per-cpu`. Instead, memory is assigned implicitly according to a predefined policy:
Default Memory Allocation
- Serial partition (QoS: `std`, `hep`, `cosmo`): 3.8 GB per core
- Parallel partition (QoS: `lattice`): 4.3 GB per core
This means that, for example, a job requesting 20 cores with `--qos=hep` will receive approximately 76 GB of total memory.
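A sketch of such a request is shown below; the job name and the `./my_analysis` executable are placeholders, and the roughly 76 GB figure simply follows from 20 × 3.8 GB under the policy above:

```bash
#!/bin/bash
#SBATCH --job-name=hep_20core      # illustrative job name
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=20         # 20 cores x 3.8 GB/core ~= 76 GB total memory
#SBATCH --qos=hep
#SBATCH --time=48:00:00
# No --mem directive: memory is assigned implicitly by the QoS policy.

srun ./my_analysis                 # placeholder executable
```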
Important
Although it may seem advantageous to submit jobs under the `lattice` QoS, since it assigns more memory per core (4.3 GB instead of 3.8 GB), this QoS automatically routes your job to the `parallel` partition.
However, nodes `somcosmo01` and `somcosmo02`, which are significantly more powerful than the standard `grwn[01-21]` nodes, are not included in the `parallel` partition.
In contrast, all other QoS options (`hep`, `cosmo`, `std`) send jobs to the `serial` partition, which prioritizes `somcosmo01` and `somcosmo02` for job allocation.
Therefore, using `lattice` automatically excludes your job from accessing the most powerful nodes available in GRAVITON.
Requesting Double Memory
If your job requires double the default memory per core, you can request it by adding the following constraint to your job script:
#SBATCH --constraint=double_mem
This constraint applies to both partitions, and will result in:
- 7.6 GB per core in the serial partition
- 8.6 GB per core in the parallel partition
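For example, a minimal sketch of a double-memory job on the serial partition; the job name and executable are placeholders, and the memory figure follows from the per-core values above:

```bash
#!/bin/bash
#SBATCH --job-name=highmem_job         # illustrative job name
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8              # within the 1-8 CPU range of the default QoS
#SBATCH --qos=std
#SBATCH --constraint=double_mem        # 8 cores x 7.6 GB/core ~= 60.8 GB on the serial partition
#SBATCH --time=12:00:00

srun ./my_memory_hungry_app            # placeholder executable
```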
Important Notes
- There is no need to manually request memory with `--mem` or similar options; doing so will result in job rejection.
- Use the `double_mem` constraint only when justified, as it reduces node availability and may delay your job.
- You can use the `grstatus` command to inspect memory allocation across nodes.