====== Interactive Jobs ======

In SLURM, interactive jobs are possible using the ''salloc'' and ''srun'' commands. ''salloc'' reserves compute nodes:

<code>
salloc -N 2    # reserves 2 nodes
</code>

It is also possible to specify the desired QOS and partition:

<code>
salloc -N 2 --qos=skylake_0096_devel --partition=skylake_0096
</code>

The command

<code>
srun hostname
</code>

returns the hostnames of the reserved compute nodes.

After the compute nodes have been allocated, they can be accessed via SSH, e.g.,

<code>
ssh n4908-006    # n4908-006 has to be replaced by the hostname of one of the reserved nodes
</code>

You can log out of the compute node by typing ''exit'' in its shell; note, however, that this does not terminate the allocation. To terminate the session, run

<code>
scancel <job_id>
</code>

The job ID is printed by ''salloc'' when the allocation is granted; it can also be looked up with ''squeue -u $USER''.

**Direct access on the compute nodes:** Alternatively, ''salloc'' can be used with the following options:

<code>
salloc -N2 srun -n1 -N1 --cpu_bind=none --mem-per-cpu=0 --pty --preserve-env --mpi=none $SHELL
</code>

This opens a shell directly on one of the allocated nodes (no SSH needed any more). You can end the session by typing ''exit'' in the shell of the compute node; in this case, ''exit'' also terminates the job.
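Putting it all together, a complete interactive session via SSH might look as follows. This is a sketch: the job ID 1234567, the QOS/partition, and the hostname are example values and have to be replaced by your own.

<code>
salloc -N 2 --qos=skylake_0096_devel --partition=skylake_0096
# -> salloc: Granted job allocation 1234567   (note the job ID)
srun hostname        # prints the hostnames of the reserved nodes
ssh n4908-006        # replace with one of the printed hostnames
exit                 # leave the node; the allocation keeps running
scancel 1234567      # terminate the allocation (use your own job ID)
</code>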
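The direct-access variant can likewise be combined with the QOS and partition options shown above (again a sketch with example values):

<code>
salloc -N 1 --qos=skylake_0096_devel --partition=skylake_0096 srun -n1 -N1 --cpu_bind=none --mem-per-cpu=0 --pty --preserve-env --mpi=none $SHELL
</code>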