  
^partition name^ description^
|vsc3plus_0064 | default, nodes with 64 GB of memory |
|vsc3plus_0256 | nodes with 256 GB of memory |
|binf | Bioinformatics nodes |
|jupyter | reserved for the JupyterHub |

For the specific GPU partitions, see [[doku:vsc-gpuqos|GPUs on VSC-3]].

The partitions of the oil-cooled nodes (normal_0064, normal_0128, normal_0256), the Xeon Phi nodes (knl), and the ARM nodes (arm) have been decommissioned and are no longer available.

===== Quality of service (QOS) =====
  
The default QOS and all usable QOSs are also shown right after login.
  
Generally, one can distinguish between QOS defined on the ordinary compute nodes (vsc3plus_0064/vsc3plus_0256), on GPUs, on Bioinformatics nodes, and on private nodes. Furthermore, there is a distinction whether a project still has computing time available or whether its computing time has already been consumed. In the latter case, jobs of this project run with low priority and a reduced maximum run time limit in the <html><font color=#cc3300>&#x27A0; idle queue</font></html>.
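As a minimal sketch of what this means in practice (the QOS and partition names follow the tables on this page; the account ''p7xxxx'' is the same placeholder used in the examples below), a job from a project with exhausted computing time would be submitted to the idle queue like this:

<code>
#SBATCH --partition=vsc3plus_0064
#SBATCH --qos=idle_0064
#SBATCH --account=p7xxxx
</code>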
  
The <html><font color=#cc3300>&#x27A0; devel queue</font></html> (devel_0064) gives fast feedback to users on whether their job is running. It is possible to connect to the node where the actual job is running and to directly [[doku:monitoring|monitor]] the job, e.g., to check whether the threads/processes are doing what is expected. This is recommended before sending the job to one of the 'computing' queues.
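A sketch of this workflow using standard Slurm commands (the node name is an illustrative placeholder; ''%N'' prints the node list of a job):

<code>
# list your running jobs together with the nodes they occupy
squeue -u $USER -o "%.10i %.12P %.8T %N"
# connect to the listed node and inspect your processes there
ssh <nodename>
top
</code>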
==== Run time limits ====
  
^ The QOS's hard run time limits ^^
| vsc3plus_0064 / vsc3plus_0256            | 72h (3 days) |
| idle_0064 / idle_0256                    | 24h (1 day)  |
| GPU queues gpu_.....                     | 72h (3 days) |
| normal_binf                              | 24h (1 day)  |
| private queues p....._0...               | up to 240h (10 days) |
| devel_0064 (up to 10 nodes available)    | 10min        |
The QOS's run time limits can also be requested via the command
<code>sacctmgr show qos format=name%20s,priority,grpnodes,maxwall,description%40s</code>
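To restrict the output to a single QOS, ''sacctmgr'' also accepts a where clause (the QOS name here is just an example from the table above):

<code>
sacctmgr show qos where name=devel_0064 format=name%20s,maxwall
</code>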
^QOS name ^ gives access to partition ^description^
|vsc3plus_0064 | vsc3plus_0064 | default |
|vsc3plus_0256 | vsc3plus_0256 | |
|gpu_.... | gpu_xxxx | GPU QOS and GPU partition of the same name |
|normal_binf | binf | |
|devel_0064 | nodes on vsc3plus_0064 | |
  
== examples ==
<code>
#SBATCH --partition=vsc3plus_0064
#SBATCH --qos=vsc3plus_0064
#SBATCH --account=p7xxxx
</code>
<code>
#SBATCH --partition=gpu_a40dual
#SBATCH --qos=gpu_a40dual
#SBATCH --account=p7xxxx
</code>
  * Note that partition, QOS, and account have to fit together.
  * If the account is not given, the default account (''sacctmgr show user `id -u` withassoc format=defaultaccount'') will be used.
  * If partition and QOS are not given, the default value is vsc3plus_0064 for both.
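Putting these pieces together, a complete job script might look as follows (the job name, resource requests, and ''./my_program'' are illustrative placeholders; ''p7xxxx'' is the account placeholder used throughout this page):

<code>
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=vsc3plus_0064
#SBATCH --qos=vsc3plus_0064
#SBATCH --account=p7xxxx
#SBATCH --nodes=1
#SBATCH --time=01:00:00

# the partition/QOS/account combination above must be valid for your project
./my_program
</code>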
  
=== private nodes projects ===
  
<code>
#SBATCH --partition=vsc3plus_xxxx
#SBATCH --qos=p7xxx_xxxx
#SBATCH --account=p7xxxx
</code>
  • doku/vsc3_queue.1631348563.txt.gz
  • Last modified: 2021/09/11 08:22
  • by goldenberg