The following GPU devices are available:
^ Tesla k20m (kepler) ||
| Total amount of global memory|4742 MBytes |
| (13) Multiprocessors, (192) CUDA Cores/MP|2496 CUDA Cores |
^ Tesla m60 (maxwell) ||
| Total amount of global memory|8114 MBytes |
| (16) Multiprocessors, (128) CUDA Cores/MP|2048 CUDA Cores |
^ Consumer-grade GeForce GTX 1080 (pascal) ||
| Total amount of global memory|8113 MBytes |
| (20) Multiprocessors, (128) CUDA Cores/MP|2560 CUDA Cores |
  * Two nodes, n25-[005,006], each equipped with two Tesla k20m (kepler) GPUs.
  * One node, n25-007, with two Tesla m60 (maxwell) GPUs. n25-007 is equipped with two Intel Xeon E5-2650 v3 CPUs @ 2.30GHz, each with 10 cores, and 256GB of host RAM.
  * Ten nodes, n25-[011-020], equipped with GeForce GTX 1080 (pascal) GPUs.
  * Two shared-private nodes, n25-[021-022], equipped with GeForce GTX 1080 (pascal) GPUs; these are available to general users at idle times.
[[https://github.com/NVIDIA/
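The tables above follow the output format of the ''deviceQuery'' utility from the NVIDIA CUDA samples (linked above). A quick way to inspect the devices directly on a node is ''nvidia-smi'', which ships with the driver; a minimal sketch, to be run on the GPU node itself:

<code>
# overview of all GPUs in the node
nvidia-smi
# detailed query of the properties listed above (memory, clocks, ECC state)
nvidia-smi -q -d MEMORY,CLOCK,ECC
</code>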
-------
==== Slurm integration ====
There are several partitions for the different GPU types:
<code>
gpu_gtx1080single
gpu_gtx1080multi
gpu_k20m
gpu_m60
</code>
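The availability, time limit and node state of these partitions can be checked with the standard Slurm ''sinfo'' command, e.g.:

<code>
# restrict the output to the GPU partitions listed above
sinfo -p gpu_gtx1080single,gpu_gtx1080multi,gpu_k20m,gpu_m60
</code>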
For each partition, an identically named QOS is defined. Slurm usage is, e.g.:

<code>
#SBATCH -p gpu_gtx1080single
#SBATCH --qos gpu_gtx1080single
</code>
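A complete job script might then look as follows; this is a minimal sketch, where the job name and the executable are placeholders and not part of the original page:

<code>
#!/bin/bash
#SBATCH -J gpu_job                 # job name (placeholder)
#SBATCH -N 1                       # request one node
#SBATCH -p gpu_gtx1080single       # partition, see list above
#SBATCH --qos gpu_gtx1080single    # identically named QOS

# replace with your actual GPU application
./my_gpu_program
</code>

Submit the script with ''sbatch jobscript.sh''; ''squeue'' shows its state.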
--------------------------------------------------
===== Visualization =====
To make use of a GPU node for visualization you need to perform the following steps.