==== Slurm integration ====
  
There are several partitions; each one groups identical compute nodes with the same number of GPUs per node:
  
<code>
gpu_gtx1080single   #  4 cpu cores, one gpu per node
gpu_gtx1080multi    # 16 cpu cores, eight gpus per node
gpu_k20m            # 16 cpu cores, two gpus per node
gpu_m60             # 16 cpu cores, one gpu per node
</code>
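The current allocation state of the nodes in one of these partitions can be checked with ''sinfo''; the partition name below is just one of the four listed above and works the same way for the others:

<code>
# show the nodes of a GPU partition together with their current state (alloc/idle/...)
[user@l31 ~]$ sinfo -p gpu_gtx1080single
</code>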
  
For each partition an identically named QOS is defined, and both are requested in the job script, e.g.:
<code>
#SBATCH -p gpu_gtx1080single
#SBATCH --qos gpu_gtx1080single
</code>
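Putting both lines into a complete job script gives the following minimal sketch; the job name, the runtime limit, and the ''nvidia-smi'' call as a stand-in workload are placeholders, and the ''--gres'' line assumes that GPUs on these nodes are requested as a generic resource:

<code>
#!/bin/bash
#SBATCH --job-name=gpu_example       # placeholder job name
#SBATCH --time=01:00:00              # placeholder runtime limit
#SBATCH -p gpu_gtx1080single         # partition from the list above
#SBATCH --qos gpu_gtx1080single      # identically named QOS
#SBATCH --gres=gpu:1                 # assumption: one GPU requested as a generic resource

nvidia-smi                           # stand-in workload: report the GPU visible to the job
</code>

The script is submitted with ''sbatch job.sh'' (''job.sh'' being whatever name the script is saved under) and its state can be monitored with ''squeue -u $USER''.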
  