doku:vsc3_gpu — revision of 2018/04/10 09:47 by markus (previous revision 2017/09/22 12:27 by sh)
The following GPU devices are available:
- | |||
- | ^ Tesla c2050 (fermi) | ||
- | | Total amount of global memory|2687 MBytes | | ||
- | | (14) Multiprocessors, | ||
- | | GPU Clock rate|1147 MHz | | ||
- | | Maximum number of threads per block|1024 | | ||
- | | Device has ECC support|Enabled | | ||
^ Tesla k20m (kepler) ^^
| Total amount of global memory | 4742 MBytes |
| (13) Multiprocessors, | |
^ Tesla m60 (maxwell) ^^
| Total amount of global memory | 8114 MBytes |
| (16) Multiprocessors, | |
^ Consumer-grade GeForce GTX 1080 (pascal) ^^
| Total amount of global memory | 8113 MBytes |
| (20) Multiprocessors, | |
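The per-device figures above can be checked on a node itself with ''nvidia-smi''. The sketch below assumes the usual setup in which the NVIDIA driver is installed on the GPU nodes; the exact set of queryable fields depends on the driver version (see ''nvidia-smi --help-query-gpu''):

```shell
# Print name, total memory and peak SM clock of every visible GPU.
# Falls back to a message on hosts without the NVIDIA driver
# (e.g. login nodes), so the script is safe to run anywhere.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,memory.total,clocks.max.sm --format=csv
else
    echo "nvidia-smi not available on this host"
fi
```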
* Two nodes, n25-[005,
* One node, n25-007, with two Tesla m60 (maxwell) GPUs. n25-007 is equipped with two Intel Xeon E5-2650 v3 CPUs @ 2.30GHz, each with 10 cores, and 256 GB of host RAM.
* Ten nodes, n25-[011-020],
* Two shared-private nodes, n25-[021-022],
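Which GPUs Slurm advertises on which of these nodes can be listed from a login node. This is a sketch using standard ''sinfo'' format specifiers; the values shown depend on the site's Slurm configuration:

```shell
# One line per node: node name, generic resources (GRES, e.g. gpu:2)
# and feature tags usable with -C/--constraint.
if command -v sinfo >/dev/null 2>&1; then
    sinfo -N -o "%N %G %f"
else
    echo "sinfo not available on this host"
fi
```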
[[https://github.com/NVIDIA/]]
-------
==== Slurm integration ====
GPU nodes are selected via the **generic resource (--gres=)** and **constraints (-C,
* k20m (kepler) GPU nodes: <code>
#SBATCH --gres=gpu:
</code>
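Put together, a job script for the kepler nodes might look as follows. This is a sketch only: the job name, GPU count, walltime, and the exact feature tag for the constraint are illustrative assumptions, not values taken from this page; check the real GRES counts and feature names with ''sinfo'' before submitting.

```shell
#!/bin/bash
# Hypothetical batch script for a k20m (kepler) GPU node.
#SBATCH --job-name=gpu_test     # assumed job name, pick your own
#SBATCH --gres=gpu:2            # assumed: request two GPUs on the node
#SBATCH -C k20m                 # assumed feature tag selecting kepler nodes
#SBATCH --time=00:10:00         # assumed walltime

nvidia-smi                      # show the devices granted to the job
```

Submit it with ''sbatch job.sh''.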
--------------------------------------------------
===== Visualization =====
To make use of a GPU node for visualization, perform the following steps.