Submission Evaluation Logic (Lua-based)
=======================================

This section outlines the internal logic used in GRAVITON to validate and
configure submitted jobs. A custom Lua script, integrated into SLURM's job
submission pipeline, enforces site policies such as partition selection,
memory allocation, and input validation. This system ensures that all jobs
conform to the resource management model of GRAVITON and provides a
predictable, fair scheduling environment for all users.

Key Rules Enforced
------------------

- **Only users with a valid accounting group** (``SOM``, ``COM``, or ``EXT``)
  are allowed to submit jobs.

- **The following SBATCH directives are not allowed** and will result in
  rejection:

  - ``--partition``
  - ``--nodes``
  - ``--mem``
  - ``--mem-per-cpu``

- **Partition assignment is automatic**, based on the selected QoS:

  - ``--qos=lattice`` → assigned to the ``parallel`` partition
  - Any other valid QoS → assigned to the ``serial`` partition

- **Memory per core is assigned automatically** according to the partition:

  - ``serial`` partition: 3.8 GB per core
  - ``parallel`` partition: 4.3 GB per core
  - If the job includes ``--constraint=double_mem``, the memory per core is
    doubled accordingly.

.. important::

   Although it may seem advantageous to submit jobs under the ``lattice``
   QoS, since it assigns more memory per core (4.3 GB instead of 3.8 GB),
   this QoS automatically routes your job to the ``parallel`` partition.
   **However**, nodes ``somcosmo01`` and ``somcosmo02``, which are
   significantly more powerful than the standard ``grwn[01-21]`` nodes, are
   **not** included in the ``parallel`` partition. In contrast, all other
   QoS options (``hep``, ``cosmo``, ``std``) send jobs to the ``serial``
   partition, which prioritizes ``somcosmo01`` and ``somcosmo02`` for job
   allocation. Therefore, using ``lattice`` automatically **excludes your
   job from accessing the most powerful nodes** available in GRAVITON.
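For reference, the sketch below shows how the first two rule groups
(accounting-group validation and the rejected SBATCH directives) could be
expressed in SLURM's ``job_submit/lua`` plugin. It is a minimal
illustration, not the production script: the ``job_desc`` field names
(``account``, ``partition``, ``min_nodes``, ``pn_min_memory``) follow
SLURM's Lua job-submit interface, but the exact names and the "unset"
sentinel values vary between SLURM releases, so treat them as assumptions.

.. code-block:: lua

   -- Sketch of the validation half of the submission logic (rules 1 and 2).
   -- NOTE: field names and sentinel values are assumptions; verify them
   -- against the SLURM release actually deployed on GRAVITON.

   local VALID_ACCOUNTS = { SOM = true, COM = true, EXT = true }

   -- Sentinels SLURM uses for numeric fields the user did not set
   -- (assumed values; they differ between releases and Lua versions).
   local NO_VAL   = 4294967294
   local NO_VAL64 = 0xFFFFFFFFFFFFFFFE

   local function is_set(value, sentinel)
       return value ~= nil and value ~= sentinel
   end

   function slurm_job_submit(job_desc, part_list, submit_uid)
       -- Rule 1: only the SOM, COM and EXT accounting groups may submit.
       if not VALID_ACCOUNTS[job_desc.account or ""] then
           slurm.log_user("Rejected: account must be SOM, COM or EXT")
           return slurm.ERROR
       end

       -- Rule 2: partition, node count and memory are site-managed.
       if job_desc.partition ~= nil then
           slurm.log_user("Rejected: --partition is assigned automatically")
           return slurm.ERROR
       end
       if is_set(job_desc.min_nodes, NO_VAL) then
           slurm.log_user("Rejected: --nodes is not allowed")
           return slurm.ERROR
       end
       -- pn_min_memory carries both --mem and --mem-per-cpu requests.
       if is_set(job_desc.pn_min_memory, NO_VAL64) then
           slurm.log_user("Rejected: --mem/--mem-per-cpu are set by the site")
           return slurm.ERROR
       end

       -- Rules 3 and 4 (QoS -> partition, memory per core) are sketched
       -- after the flow diagram below.
       return slurm.SUCCESS
   end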
Job Submission Flow Diagram
---------------------------

The diagram below illustrates the complete decision process that SLURM
applies to submitted jobs via the Lua script.

.. mermaid::

   graph TD
       A[Job submitted]
       B{Accounting group?}
       B1[Prio = 10]
       B2[Prio = 10]
       B3[Prio = 2]
       C{Past Member?}
       D{"--partition?<br/>--nodes?<br/>--mem?<br/>--mem-per-cpu?"}
       H{QoS?}
       I[Assign partition = parallel]
       J[Assign partition = serial]
       K1[Mem = 4.3 GB x CPU]
       K2[Mem = 3.8 GB x CPU]
       L{"--constraint=double_mem?"}
       M[Mem = Mem x 2]
       N[Job accepted ✔]
       Z1[Reject ❌]
       Z2[Reject ❌]

       A --> B
       B -->|SOM| B1 --> C
       B -->|COM| B2 --> C
       B -->|EXT| B3 --> C
       B -->|NO| Z1
       C -- Yes --> Z2
       C -- No --> D
       D -- Yes --> Z2
       D -- No --> H
       H -->|lattice| I --> K1 --> L
       H -->|hep| J
       H -->|cosmo| J
       H -->|std| J
       J --> K2 --> L
       L -- Yes --> M --> N
       L -- No --> N
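The QoS and memory branch of the diagram (nodes ``H`` through ``M``)
reduces to a small, self-contained helper. The sketch below is again
illustrative: the function name ``assign_partition_and_memory`` and the
MB rounding of the 3.8 GB and 4.3 GB figures are assumptions, not the
production values.

.. code-block:: lua

   -- Companion to the validation sketch above: implements the
   -- H -> I/J -> K1/K2 -> L -> M path of the flow diagram.

   local MEM_PER_CORE_MB = {
       serial   = 3891,   -- ~3.8 GB per core (assumed rounding)
       parallel = 4403,   -- ~4.3 GB per core (assumed rounding)
   }

   -- Returns the partition name and the per-core memory (in MB).
   -- `qos` is the requested QoS; `features` is the --constraint string.
   local function assign_partition_and_memory(qos, features)
       -- lattice goes to parallel; every other valid QoS goes to serial.
       local partition = (qos == "lattice") and "parallel" or "serial"
       local mem_mb = MEM_PER_CORE_MB[partition]

       -- --constraint=double_mem doubles the memory per core.
       if features ~= nil and string.find(features, "double_mem", 1, true) then
           mem_mb = mem_mb * 2
       end
       return partition, mem_mb
   end

   -- Example: a lattice job with the double_mem constraint ends up on the
   -- parallel partition with twice the usual per-core memory.
   print(assign_partition_and_memory("lattice", "double_mem"))
   --> parallel    8806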