doku:slurm, revision 2023/03/14 12:15 by goldenberg (previous revision: 2023/02/17 16:42 by msiegel)
==== Node configuration - hyperthreading ====
The compute nodes of VSC-4 are configured with the following parameters in SLURM:
<code>
CoresPerSocket=24
Sockets=2
ThreadsPerCore=2
</code>
And the primary nodes of VSC-5 with:
<code>
CoresPerSocket=64
Sockets=2
ThreadsPerCore=2
</code>
This reflects the fact that hyperthreading is activated on the compute nodes, so each physical core provides two logical CPUs.
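The node parameters above determine how many logical CPUs SLURM sees per node: the product of sockets, cores per socket, and threads per core. A quick check of the arithmetic:

```shell
# Logical CPUs per node = Sockets x CoresPerSocket x ThreadsPerCore

# VSC-4 node: 2 sockets x 24 cores x 2 hardware threads
echo $(( 2 * 24 * 2 ))    # 96

# VSC-5 primary node: 2 sockets x 64 cores x 2 hardware threads
echo $(( 2 * 64 * 2 ))    # 256
```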
In the batch script, hyperthreading is selected by adding the line
<code>
#SBATCH --ntasks-per-core=2
</code>
which allows for 2 tasks per core.
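Putting this together, a minimal sketch of a batch script that uses hyperthreading on a VSC-4 node might look as follows; the job name, partition omission, and program name are placeholders, not site defaults:

```shell
#!/bin/bash
# Hypothetical example: run 96 MPI tasks on one VSC-4 node
# (48 physical cores x 2 hardware threads).
#SBATCH --job-name=ht_example
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=96
#SBATCH --ntasks-per-core=2    # enable 2 tasks per core (hyperthreading)

srun ./my_mpi_program
```

Without `--ntasks-per-core=2`, SLURM places at most one task per physical core, so at most 48 tasks would fit on such a node.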
Some codes may experience a performance gain from using all virtual cores, e.g., GROMACS seems to profit. But note that using all virtual cores also leads to more communication and may impact the performance of large MPI jobs.
**NOTE on accounting**:
==== Node allocation policy ====