Interactive Jobs
In SLURM, interactive jobs are possible using the salloc and srun commands.
salloc
enables node reservation
salloc -N 2 # reserves 2 nodes
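After the allocation has been granted, the job ID and the list of reserved nodes are available in standard SLURM environment variables, e.g.,
echo $SLURM_JOB_ID       # job ID of the allocation
echo $SLURM_JOB_NODELIST # compressed list of the reserved nodes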
It is also possible to specify the desired QOS and partition:
salloc -N 2 --qos=skylake_0096_devel --partition=skylake_0096
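Further salloc options can be combined in the same way; as a sketch, the following additionally requests a time limit of one hour (the QOS and partition names are the site-specific ones from the example above):
salloc -N 2 --time=01:00:00 --qos=skylake_0096_devel --partition=skylake_0096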
The command
srun hostname
returns the hostnames of the reserved compute nodes. After allocation of the compute nodes, SSH access to these nodes is possible, e.g.,
ssh n4908-006 # n4908-006 has to be replaced by the hostname of one of the reserved nodes
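If needed, the compressed node list can be expanded into individual hostnames suitable for SSH; a minimal sketch using the standard scontrol subcommand:
scontrol show hostnames $SLURM_JOB_NODELIST # prints one hostname per line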
To log out from the compute node, type exit
in the shell of the compute node; note, however, that this does not terminate the allocated node session.
To terminate the session, run
scancel <job_id>
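If the job ID is not known, it can be looked up first, e.g., with squeue:
squeue -u $USER # lists your jobs together with their job IDs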
Direct access to the compute nodes: Alternatively, salloc
can be used with the following options,
salloc -N2 srun -n1 -N1 --cpu_bind=none --mem-per-cpu=0 --pty --preserve-env --mpi=none $SHELL
so that you directly get a session on one of the allocated nodes (no SSH needed anymore).
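For example, once the prompt appears, you can verify that you are on a compute node and not on the login node:
hostname # prints the name of the allocated compute node you are working on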
You can end the session by typing exit
in the shell of the compute node. In this case, exit
also terminates the job.