Interactive Jobs

In SLURM, interactive jobs are possible via the salloc/srun commands. salloc reserves compute nodes:

salloc -N 2                  #    reserves 2 nodes
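
To verify that the allocation has been granted, the job queue can be inspected with the standard SLURM command squeue:

squeue -u $USER              #    lists your jobs; the interactive allocation appears here once it is granted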

It is also possible to specify the desired QOS and partition:

salloc -N 2 --qos=admin_0128 --partition=mem_0128
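
Further standard SLURM options can be combined in the same way, for example a wall-time limit (the value below is only an illustration; the permitted limits depend on the chosen QOS):

salloc -N 2 --qos=admin_0128 --partition=mem_0128 --time=01:00:00   #    limits the allocation to one hour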

The command

srun hostname               

returns the hostnames of the reserved compute nodes. Once the nodes are allocated, they can also be accessed via SSH, e.g.,

ssh n16-006   #     replace n16-006 by the hostname of one of the reserved nodes
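
Alternatively, the allocated hostnames can be listed directly from the environment of the salloc shell, assuming salloc exports the SLURM_JOB_NODELIST variable (which it does by default):

scontrol show hostnames $SLURM_JOB_NODELIST   #    expands the compact node list into one hostname per line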

To log out from the compute node, type exit in its shell. Note, however, that this does not terminate the allocation itself.
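
Exiting the shell in which salloc was started releases the allocation; alternatively, the allocation can be cancelled explicitly with scancel (the job ID below is only a placeholder, take the real one from squeue or from the salloc output):

scancel 1234567              #    replace 1234567 by the job ID of your allocation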

Direct access to the compute nodes: Alternatively, salloc can be used with the following options:

salloc -N2 srun -n1 -N1 --cpu_bind=none --mem-per-cpu=0 --pty --preserve-env --mpi=none $SHELL

This gives you a shell directly on one of the allocated nodes (no SSH needed anymore). You can end the session by typing exit in the shell of the compute node; in this case, exit also terminates the job.
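
A typical session with this variant could look as follows; the commands after the salloc line are typed in the shell that opens on the compute node:

salloc -N2 srun -n1 -N1 --cpu_bind=none --mem-per-cpu=0 --pty --preserve-env --mpi=none $SHELL
hostname                     #    now executed on one of the allocated compute nodes
exit                         #    ends the session and terminates the job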
