And the primary nodes of VSC-5 with:
<code>
CoresPerSocket=64
Sockets=2
ThreadsPerCore=2
</code>
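These values can be verified directly with SLURM; the node name below is only a placeholder, an actual name can be taken from the output of ''sinfo'':
<code>
# show how SLURM sees one particular node; n0001 is a placeholder name
scontrol show node n0001 | grep -E 'Sockets|CoresPerSocket|ThreadsPerCore'
</code>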
==== Node allocation policy ====
On VSC-4 there is a set of nodes which accept jobs that do not require entire nodes (anything from 1 core to less than a full node). These nodes are set up to accommodate jobs from different users until they are full, and they are used automatically for jobs of this kind. All other nodes are assigned completely to a single job.
On VSC-5 this feature is not yet active, so only complete nodes are assigned.
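For example, a job that needs only a few cores on VSC-4 can simply request tasks instead of whole nodes; a minimal sketch (the executable name is a placeholder):
<code>
#!/bin/bash
#SBATCH -J small
#SBATCH --ntasks=4        # only 4 cores; the job may share a node with other jobs
./my_program              # placeholder executable
</code>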
==== The job submission script ====
It is recommended to write the job script using a text editor on the VSC //Linux// cluster itself.
Editors in //Windows// may add invisible characters to the job file that render it unreadable, so it is not executed.
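If a script has been edited under //Windows//, the offending carriage returns can be detected and removed on the cluster, assuming the ''dos2unix'' utility is installed:
<code>
file check.slrm        # reports "CRLF line terminators" for a Windows file
dos2unix check.slrm    # converts the file to Unix line endings in place
</code>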
A simple job script may look like this:
<code>
#!/bin/bash
#SBATCH -J chk
#SBATCH -N 2
#SBATCH --ntasks-per-node=48
#SBATCH --ntasks-per-core=1
#SBATCH --mail-type=BEGIN
# when srun is used, you need to set:
srun -l -N2 -n96 a.out
# or
mpirun -np 96 a.out
</code>
The task count follows from the allocation: 2 nodes with 48 tasks per node give 96 tasks in total, hence ''-n96'' and ''-np 96''.
  * **-J** job name
  * **--mail-user** sends an email to this address
In order to send the job to specific queues, see [[doku:vsc4_queue|Queue/Partition setup on VSC-4]].
==== Job submission ====
<code>
[username@l42 ~]$ sbatch check.slrm
[username@l42 ~]$ squeue -u `whoami`
[username@l42 ~]$ scancel JOBID
# JOBID is obtained from the previous command
</code>
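To see more detail on a job, e.g. why it is still pending, ''scontrol'' can be queried with the job id reported by ''squeue'':
<code>
[username@l42 ~]$ scontrol show job JOBID
# JOBID is the numeric id shown by squeue
</code>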
A script that starts several multi-node job steps within one allocation may look like this:
<code>
#!/bin/bash
#SBATCH -J chk
#SBATCH -N 4
#SBATCH --ntasks-per-node=48
#SBATCH --ntasks-per-core=1
scontrol show hostnames $SLURM_NODELIST   # list the allocated nodes
srun -l -N2 -r0 -n96 job1.scrpt &         # step on nodes 0-1 of the allocation
srun -l -N2 -r2 -n96 job2.scrpt &         # step on nodes 2-3 (-r = relative node offset)
wait                                      # wait until both steps have finished
srun -l -N2 -r2 -n96 job3.scrpt &
srun -l -N2 -r0 -n96 job4.scrpt &
wait
</code>
Another example of a job script, here building a machines file for the MPI tasks:
<code>
#!/bin/bash
#SBATCH -J par                  # job name
#SBATCH -N 2                    # number of nodes=2
#SBATCH --ntasks-per-node=48    # uses all cpus of one node
#SBATCH --ntasks-per-core=1
#SBATCH --threads-per-core=1
rm machines_tmp
tasks_per_node=48    # change number accordingly
nodes=2
for ((line=1; line<=nodes; line++))
...
</code>
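A complete version of such a loop might look like this; the ''nodes_tmp'' helper file is an assumption, only ''machines_tmp'' appears in the original script:
<code>
# sketch: write each allocated hostname tasks_per_node times into machines_tmp
scontrol show hostnames $SLURM_NODELIST > nodes_tmp
for ((line=1; line<=nodes; line++))
do
  host=$(sed -n "${line}p" nodes_tmp)
  for ((task=1; task<=tasks_per_node; task++))
  do
    echo $host >> machines_tmp
  done
done
</code>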
  - continue at 2. for further dependent jobs (a minimal sketch of such a chain follows below)
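A minimal sketch of such a chain, using placeholder script names; ''--parsable'' makes ''sbatch'' print only the job id:
<code>
JOBID1=$(sbatch --parsable job1.slrm)           # submit the first job
sbatch --dependency=afterok:$JOBID1 job2.slrm   # start job2 only after job1 completed successfully
</code>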
===== Licenses =====
Software that uses a license server has to be specified upon job submission. A list of all licensed software available for your user can be shown with the command:
<code>
slic
</code>
Within the job script, request the license with the ''-L'' flag, e.g. for Matlab:
<code>
#SBATCH -L matlab@vsc
</code>
===== Prolog Error Codes =====