==== Node configuration - hyperthreading ====
  
The compute nodes of VSC-4 are configured with the following parameters in SLURM:
<code>
CoresPerSocket=24
Sockets=2
ThreadsPerCore=2
</code>
And the primary nodes of VSC-5 with:
<code>
CoresPerSocket=64
Sockets=2
ThreadsPerCore=2
</code>
This reflects the fact that <html> <font color=#cc3300> hyperthreading </font> </html> is activated on all compute nodes and <html> <font color=#cc3300> 96 cores on VSC-4 and 256 cores on VSC-5 </font> </html> may be utilized on each node.
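The configured values can also be checked directly with SLURM's own tools, as in this minimal sketch (the node name is only a placeholder; pick any node listed by ''sinfo''):
<code>
# sockets, cores per socket and threads per core for all nodes
sinfo -o "%N %X %Y %Z"

# detailed view of a single node (replace the node name)
scontrol show node n4901-001 | grep -E "CoresPerSocket|Sockets|ThreadsPerCore"
</code>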
In the batch script hyperthreading is selected by adding the line
<code>
#SBATCH --ntasks-per-core=2
</code>
which allows for 2 tasks per core.
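A minimal sketch of a batch script that uses all virtual cores of one VSC-4 node could look as follows (''my_mpi_program'' is a placeholder for your executable):
<code>
#!/bin/bash
#SBATCH --job-name=ht_example
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=96
#SBATCH --ntasks-per-core=2

# 96 tasks = 48 physical cores x 2 hyperthreads per VSC-4 node
srun ./my_mpi_program
</code>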
  
Some codes may experience a performance gain from using all virtual cores, e.g., GROMACS seems to profit. But note that using all virtual cores also leads to more communication and may impact the performance of large MPI jobs.
  
**NOTE on accounting**: the project's core-h are always calculated as ''job_walltime * nnodes * ncpus'' (''ncpus'' = number of physical cores per node). SLURM's built-in function ''sreport'' yields wrong accounting statistics because (depending on the job script) the multiplier may be the number of virtual cores instead of the number of physical cores. You may instead use the accounting script introduced in this [[doku:slurm_sacct|section]].
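As a worked example (assuming a VSC-4 node with 2 x 24 = 48 physical cores), a job running 10 hours on 2 nodes is charged the same amount of core-h with or without hyperthreading:
<code>
# job_walltime * nnodes * ncpus = 10 * 2 * 48
echo $(( 10 * 2 * 48 ))   # 960 core-h
</code>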
  
==== Node allocation policy ====
On VSC-4 there is a set of nodes that accept jobs which do not require entire nodes (anything from 1 core to less than a full node). These nodes are set up to accommodate different jobs from different users until they are full, and they are automatically used for such jobs. All other nodes are assigned completely to a single job.
On VSC-5 this feature is not yet active, so only complete nodes are assigned to jobs.
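For illustration, a request like the following sketch needs far less than a full node and would therefore be placed on such a shared node on VSC-4 (''my_program'' and the memory value are placeholders):
<code>
#!/bin/bash
#SBATCH --job-name=single_core
#SBATCH --ntasks=1
#SBATCH --mem=2G

./my_program
</code>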
  
  