====== Queue | Partition | QOS setup on VSC-4 ======

On VSC-4, nodes with the same type of hardware are grouped into partitions. The quality of service (QOS), formerly called //Queue//, defines the maximum run time of a job and the number and type of allocatable nodes.

For submitting jobs to [[doku:slurm]], three parameters are important:
  
<code bash>
#SBATCH --account=xxxxxx
#SBATCH --partition=skylake_xxxx
#SBATCH --qos=xxxxx_xxxx
</code>

Notes:

  * Core hours will be charged to the specified account.
  * Account, partition, and QOS have to fit together.
  * If the account is not given, the default account will be used.
  * If partition and QOS are not given, the default value ''skylake_0096'' is used for both.
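
For instance, a minimal job script that spells out all three parameters explicitly, using the defaults, might look like the following sketch (''p7xxxx'' and ''my_program'' are placeholders for your own project and program):

<code bash>
#!/bin/bash
#SBATCH --account=p7xxxx
#SBATCH --partition=skylake_0096
#SBATCH --qos=skylake_0096

# replace with your actual program
./my_program
</code>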

===== Partitions =====

Nodes of the same type of hardware are grouped into partitions. There are three basic types of compute nodes, all with the same CPU but with different amounts of memory: 96 GB, 384 GB, and 768 GB.

These are the partitions on VSC-4:

^ Partition ^ Nodes ^ Architecture ^ CPU ^ GPU ^ RAM ^ Use ^
| skylake_0096 | 702 | Intel | 2x Xeon Silver 4108 | No | 96 GB | The default partition |
| skylake_0384 | 78 | Intel | 2x Xeon Silver 4108 | No | 384 GB | High Memory partition |
| skylake_0768 | 12 | Intel | 2x Xeon Silver 4108 | No | 768 GB | Higher Memory partition |

Type ''sinfo -o %P'' on any node to see all the available partitions.
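
The format string can also be extended for a quick overview of node counts and memory; ''%D'' (number of nodes) and ''%m'' (memory per node in MB) are standard ''sinfo'' format specifiers:

<code bash>
# list each partition with its node count and memory per node (in MB)
sinfo -o "%P %D %m"
</code>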

For the sake of completeness, there are also internally used //special// partitions that cannot be selected manually:

^ Partition ^ Description ^
| login4 | login nodes, not an actual Slurm partition |
| rackws4 | GUI login nodes, not an actual Slurm partition |
| _jupyter | reserved for the JupyterHub |
  
===== Quality of service (QOS) =====
  
The QOS defines the maximum run time of a job and the number and type of allocatable nodes.

The QOSs that are assigned to a specific user can be viewed with:
<code>
sacctmgr show user `id -u` withassoc format=user,defaultaccount,account,qos%40s,defaultqos%20s
</code>

All usable QOS are also shown right after login.
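
To see only your default account (the one that is charged when ''--account'' is not set), the format can be reduced accordingly:

<code>
sacctmgr show user `id -u` withassoc format=defaultaccount
</code>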
  
==== QOS, Partitions and Run time limits ====

The following QOS are available for all normal (i.e. non-private) projects:
  
  
^ QOS name ^ Gives access to Partition ^ Hard run time limits ^ Description ^
| skylake_0096 | skylake_0096 | 72h (3 days) | Default |
| skylake_0384 | skylake_0384 | 72h (3 days) | High Memory |
| skylake_0768 | skylake_0768 | 72h (3 days) | Higher Memory |
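
For example, to run on one of the 384 GB nodes, partition and QOS have to be chosen consistently (a sketch; ''p7xxxx'' is a placeholder account):

<code bash>
#SBATCH --account=p7xxxx
#SBATCH --partition=skylake_0384
#SBATCH --qos=skylake_0384
</code>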
  
  
==== Idle QOS ====

If a project runs out of compute time, its jobs run with low priority and a reduced maximum run time limit in the //idle// QOS.
  
^ QOS name ^ Gives access to Partition ^ Hard run time limits ^ Description ^
| idle_0096 | skylake_0096 | 24h (1 day) | Projects out of compute time |
| idle_0384 | skylake_0384 | 24h (1 day) | Projects out of compute time |
| idle_0768 | skylake_0768 | 24h (1 day) | Projects out of compute time |
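
A job for a project that is out of compute time could, for example, be directed to the idle QOS like this (placeholder account again):

<code bash>
#SBATCH --account=p7xxxx
#SBATCH --partition=skylake_0096
#SBATCH --qos=idle_0096
</code>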
  
==== Devel QOS ====

The //devel// QOS gives users fast feedback on whether their job runs as intended. Connect to the node where the actual job is running and directly [[doku:monitoring|monitor]] it to check if the threads/processes are doing what you expect. We recommend this before sending the job to one of the ''compute'' queues.

^ QOS name ^ Gives access to Partition ^ Hard run time limits ^
| skylake_0096_devel | 5 nodes on skylake_0096 | 10min |
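
A short test run in the devel QOS might be requested like this sketch (note the 10-minute cap; the account is a placeholder):

<code bash>
#SBATCH --account=p7xxxx
#SBATCH --partition=skylake_0096
#SBATCH --qos=skylake_0096_devel
#SBATCH --time=00:10:00
</code>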

==== Private Projects ====

Private projects come with their own QOS; nevertheless, partition, QOS, and account still have to fit together.

^ QOS name ^ Gives access to Partition ^ Hard run time limits ^ Description ^
| p....._0... | various | up to 240h (10 days) | private queues |

For submitting jobs to [[doku:slurm]], three parameters are important:

<code bash>
#SBATCH --account=pxxxxx
#SBATCH --partition=skylake_xxxx
#SBATCH --qos=pxxxx_xxxx
</code>
  
  
==== Run time ====

The QOS's run time limits can also be requested via the command:

<code>
sacctmgr show qos format=name%20s,priority,grpnodes,maxwall,description%40s
</code>

If you know how long your job usually runs, you can set the run time limit in SLURM:

<code>
#SBATCH --time=<time>
</code>

Of course, this has to be //below// the run time limit of the QOS in use. Your job might then start earlier, which is nice; but after the specified time has elapsed, the job is killed!

Acceptable time formats include the following (an example is given after the list):
  * "minutes"
  * "minutes:seconds"
  * "hours:minutes:seconds"
  * "days-hours"
  * "days-hours:minutes"
  * "days-hours:minutes:seconds"