

Interactive Jobs

In SLURM, interactive jobs are possible using the salloc/srun commands. salloc reserves compute nodes:

salloc -N 2                  #    reserves 2 nodes

It is also possible to specify the desired QOS and partition:

salloc -N 2 --qos=devel_0096 --partition=mem_0096
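
Other standard SLURM options can be added in the same way; for example, a wall-clock limit can be requested with --time (a generic salloc option, not specific to this page; the value below is just an example):

salloc -N 2 --qos=devel_0096 --partition=mem_0096 --time=01:00:00   #   example: 2 nodes with a 1 hour time limit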

The command

srun hostname               

returns the hostnames of the reserved compute nodes. Once the nodes have been allocated, they can also be accessed via SSH, e.g.,

ssh n408-006   #     replace n408-006 by the hostname of one of the reserved nodes
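
If you are unsure which nodes belong to your allocation, the node list can be inspected from within the salloc shell. This is a sketch using the standard SLURM environment variable SLURM_JOB_NODELIST and the scontrol command (both generic SLURM features, not specific to this page):

echo $SLURM_JOB_NODELIST                       #   compressed node list of the allocation
scontrol show hostnames $SLURM_JOB_NODELIST    #   expands the list to one hostname per line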

Typing exit in the shell of the compute node logs you out of that node; however, this does not terminate the allocation. To terminate the session, it is necessary to run

scancel <job_id>
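
The job id is printed when the allocation is granted; it can also be looked up with the standard squeue command, e.g. filtered by your own user name:

squeue -u $USER              #   lists your jobs together with their job ids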

Direct access to the compute nodes: Alternatively, salloc can be used with the following options,

salloc -N2 srun -n1 -N1 --cpu_bind=none --mem-per-cpu=0 --pty --preserve-env --mpi=none $SHELL

so that you directly get a shell session on one of the allocated nodes (no SSH needed anymore). You can end the session by typing exit in the shell of the compute node. In this case, exit also terminates the job.
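
This variant can be combined with the QOS and partition options shown above; as a sketch, reusing the example QOS and partition names from this page:

salloc -N2 --qos=devel_0096 --partition=mem_0096 srun -n1 -N1 --cpu_bind=none --mem-per-cpu=0 --pty --preserve-env --mpi=none $SHELL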
