==== Node configuration - hyperthreading ====
  
The compute nodes of VSC-4 are configured with the following parameters in SLURM:
<code>
CoresPerSocket=24
Sockets=2
ThreadsPerCore=2
</code>
And the primary nodes of VSC-5 with:
<code>
CoresPerSocket=64
Sockets=2
ThreadsPerCore=2
</code>
This reflects the fact that <html> <font color=#cc3300> hyperthreading </font> </html> is activated on all compute nodes and <html> <font color=#cc3300> 96 cores on VSC-4 and 256 cores on VSC-5 </font> </html> may be utilized on each node.
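These parameters can be checked directly on the cluster; below is a minimal sketch using standard SLURM commands (the node name is a placeholder):
<code>
# socket/core/thread layout per node:
# %n=hostname, %X=sockets, %Y=cores per socket, %Z=threads per core, %c=CPUs
sinfo -o "%n %X %Y %Z %c"

# or inspect a single node (n0001 is a placeholder name):
scontrol show node n0001 | grep -E 'CoresPerSocket|ThreadsPerCore|Sockets'
</code>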
In the batch script, hyperthreading is selected by adding the line
<code>
#SBATCH --ntasks-per-core=2
</code>
which allows for 2 tasks per core.
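Putting this together, a minimal job script that uses all hyperthreads of one VSC-4 node could look as follows; ''my_program'' is a placeholder for the actual executable:
<code>
#!/bin/bash
#SBATCH --job-name=hyperthreading_test
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=96     # 2 sockets x 24 cores x 2 threads on VSC-4
#SBATCH --ntasks-per-core=2      # allow 2 tasks per physical core

srun ./my_program                # my_program is a placeholder
</code>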
  
Some codes may experience a performance gain from using all virtual cores; GROMACS, for example, seems to profit. Note, however, that using all virtual cores also leads to more communication and may impact the performance of large MPI jobs.
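Conversely, if a large MPI job suffers from the extra communication, hyperthreading can be avoided by placing only one task per physical core; a minimal sketch for a VSC-4 node (the task count follows from 2 sockets x 24 cores):
<code>
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48     # one task per physical core on VSC-4
#SBATCH --ntasks-per-core=1      # do not place tasks on hyperthreads
</code>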
  
**NOTE on accounting**: the project's core-h are always calculated as ''job_walltime * nnodes * ncpus'', where ''ncpus'' is the number of physical cores per node. SLURM's built-in function ''sreport'' yields wrong accounting statistics because (depending on the job script) the multiplier may be the number of virtual cores instead of the number of physical cores. You may instead use the accounting script introduced in this [[doku:slurm_sacct|section]].
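As an illustration, the core-h of a finished job can be recomputed by hand from ''sacct'' output; the job ID below is a placeholder, and 48 is the number of physical cores of a VSC-4 node:
<code>
# elapsed walltime and node count of the job (1234567 is a placeholder ID):
sacct -j 1234567 --format=JobID,Elapsed,NNodes

# core-h = walltime_in_hours * NNodes * 48   (VSC-4: 2 sockets x 24 cores)
</code>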
  
==== Node allocation policy ====