====== GPUs available & how to use them ======

===== TOP500 List June 2024 =====
^ Rank^Nation ^
| 1.|{{.:us.png?
| 2.|{{.:
| 3.|{{.:
| 4.|{{.:jp.png?
| 5.|
| 6.|{{.:ch.png?
| 7.|{{.:it.png?
| 8.|
| 9.|{{.:us.png?
| 10.|
===== Components on VSC-5 =====

^Model ^#
|19x GeForce RTX-2080Ti n375-[001-019] - only in a special project
|{{:
|45x2 nVidia A40 n306[6,7,8]-[001-019,
|{{ :
|62x2 nVidia A100-40GB n307[1-4]-[001-015]
|{{ :
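To cross-check the hardware table above against what SLURM actually offers, the partitions and their GPU resources can be listed from a login node. A minimal sketch using standard `sinfo` format specifiers; the helper name `gpu_partitions` is ours, not part of VSC:

```shell
#!/bin/sh
# Print every partition that advertises GPU resources (GRES) together
# with its node list. Format specifiers: %P = partition, %G = gres,
# %N = nodelist.
gpu_partitions() {
    sinfo --noheader -o "%P %G %N" | grep gpu
}
```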
==== Working on GPU nodes Interactively ====
**Interactive mode**
<code>
1. VSC-5 > salloc -N 1 -p zen2_0256_a40x2
2. VSC-5 > squeue -u $USER
3. VSC-5 > srun -n 1 hostname
4. VSC-5 > ssh n3066-012 (...or whichever node has been assigned)
5. VSC-5 > module load cuda/
           cd ~/
           nvcc ./
           ./a.out
6. VSC-5 > nvidia-smi
7. VSC-5 > /
</code>
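Step 4 above requires knowing which node SLURM assigned to your allocation. As a small convenience sketch (the helper name and the job id are our own examples, not VSC-provided), the node name can be pulled out of `squeue` directly:

```shell
#!/bin/sh
# Look up the node list of a running job, so you know where to ssh
# (step 4 of the interactive session above).
assigned_node() {
    # -j <jobid> selects the job, -h suppresses the header,
    # %N prints the nodelist column
    squeue -j "$1" -h --format=%N
}

# usage (the job id is a hypothetical example):
#   ssh "$(assigned_node 1234567)"
```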
===== Working on GPU using SLURM =====

**SLURM submission** gpu_test.scrpt
<code bash>
# usage: sbatch ./
#
#SBATCH -J A40
#SBATCH -N 1        # use -N 1 only if you use both GPUs of the node; otherwise leave this line out
#SBATCH --partition zen2_0256_a40x2
#SBATCH --qos zen2_0256_a40x2
#SBATCH --gres=gpu:
module purge

/
</code>
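The submission itself is then a one-liner. A minimal self-contained sketch that writes a job script like the one above and hands it to `sbatch`; partition and qos follow the A40 example, while `--gres=gpu:2` (both GPUs of a node) is an assumption — use `gpu:1` for a single GPU:

```shell
#!/bin/sh
# Generate a minimal A40 job script and submit it. Partition/qos names
# are taken from the example above; the GPU count of 2 is an assumption.
cat > gpu_test.scrpt <<'EOF'
#!/bin/bash
#SBATCH -J A40
#SBATCH --partition zen2_0256_a40x2
#SBATCH --qos zen2_0256_a40x2
#SBATCH --gres=gpu:2
module purge
nvidia-smi
EOF

# sbatch ./gpu_test.scrpt   # run this on a VSC-5 login node
```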
===== Real-World Example, AMBER-16 =====
^
| {{.: