
GPUs available & how to use them


The ten fastest systems of the TOP500 list (June 2020) and their accelerators:

Rank  Nation       Machine            Performance   Accelerators
  1   Japan        Fugaku             416 PFLOP/s
  2   USA          Summit             149 PFLOP/s   NVIDIA V100
  3   USA          Sierra              95 PFLOP/s   NVIDIA V100
  4   China        Sunway TaihuLight   93 PFLOP/s
  5   China        Tianhe-2A           62 PFLOP/s
  6   Italy        HPC5                36 PFLOP/s   NVIDIA V100
  7   USA          Selene              28 PFLOP/s   NVIDIA A100
  8   USA          Frontera            24 PFLOP/s   NVIDIA RTX5000/V100
  9   Italy        Marconi-100         22 PFLOP/s   NVIDIA V100
 10   Switzerland  Piz Daint           21 PFLOP/s   NVIDIA P100


GPU nodes available at VSC:

Count  Model                 Nodes                                   #Cores  Clock (GHz)  Memory (GB)  Bandwidth (GB/s)  TDP (W)  FP32/FP64 (GFLOP/s)
19     GeForce RTX 2080 Ti   n375-[001-019]                           4352   1.35         11            616              250      13450/420
45×2   NVIDIA A40            n306[6,7,8]-[001-019,001-019,001-007]   10752   1.305        48            696              300      37400/1169
60×2   NVIDIA A100-40GB      n307[1-4]-[001-015]                      6912   0.765        40           1555              250      19500/9700
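
The memory size and bandwidth columns can be checked directly on a node with the CUDA runtime API. Below is a minimal sketch (not part of the VSC examples; the file name query_gpus.cu is just an assumption) that lists all visible GPUs and estimates the peak memory bandwidth as 2 × memory clock × bus width. It is built and run the same way as the examples in the interactive session below.

// query_gpus.cu - minimal sketch: list GPUs and derive peak memory bandwidth
// compile: nvcc query_gpus.cu -o query_gpus
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    if (cudaGetDeviceCount(&n) != cudaSuccess || n == 0) {
        printf("no CUDA device visible\n");
        return 1;
    }
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        // memoryClockRate is in kHz, memoryBusWidth in bits;
        // the factor 2 accounts for double data rate (GDDR/HBM)
        double bw = 2.0 * p.memoryClockRate * 1e3 * (p.memoryBusWidth / 8.0) / 1e9;
        printf("GPU %d: %s, %zu MiB, %d SMs, %.0f GB/s peak bandwidth\n",
               i, p.name, p.totalGlobalMem >> 20, p.multiProcessorCount, bw);
    }
    return 0;
}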


Interactive mode

1. VSC-5 >  salloc -N 1 -p zen2_0256_a40x2 --qos zen2_0256_a40x2 --gres=gpu:2

2. VSC-5 >  squeue -u $USER

3. VSC-5 >  srun -n 1 hostname  (...while still on the login node!)

4. VSC-5 >  ssh n3066-012  (...or whichever node has been assigned)

5. VSC-5 >  module load cuda/9.1.85
            cd ~/examples/09_special_hardware/gpu_gtx1080/matrixMul
            nvcc ./matrixMul.cu
            ./a.out

            cd ~/examples/09_special_hardware/gpu_gtx1080/matrixMulCUBLAS
            nvcc matrixMulCUBLAS.cu -lcublas
            ./a.out
            (any other self-contained .cu file can be built the same way; see the sketch after this list)

6. VSC-5 >  nvidia-smi

7. VSC-5 >  /opt/sw/x86_64/glibc-2.17/ivybridge-ep/cuda/9.1.85/NVIDIA_CUDA-9.1_Samples/1_Utilities/deviceQuery/deviceQuery
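
Any self-contained .cu file can be built and run exactly like the matrixMul examples in step 5. The following vector-addition sketch (saxpy.cu is a hypothetical file name, not part of the VSC examples) only needs a loaded cuda module: nvcc saxpy.cu, then ./a.out; if the kernel ran on the GPU, both printed values are 5.0.

// saxpy.cu - minimal self-contained CUDA test (not part of the VSC examples)
// compile: nvcc saxpy.cu      run: ./a.out
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // unified memory keeps the example short; explicit cudaMemcpy works as well
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    // every element should now be 3*1 + 2 = 5
    printf("y[0] = %.1f, y[n-1] = %.1f\n", y[0], y[n - 1]);
    cudaFree(x); cudaFree(y);
    return 0;
}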


SLURM submission script gpu_test.scrpt

#!/bin/bash
#
#  usage: sbatch ./gpu_test.scrpt          
#
#SBATCH -J A40
#SBATCH -N 1                           # use -N only if you need both GPUs of a node, otherwise leave this line out
#SBATCH --partition zen2_0256_a40x2
#SBATCH --qos zen2_0256_a40x2
#SBATCH --gres=gpu:2                   # or --gres=gpu:1 if you only want to use half a node
 
module purge
module load cuda/9.1.85
 
nvidia-smi
/opt/sw/x86_64/glibc-2.17/ivybridge-ep/cuda/9.1.85/NVIDIA_CUDA-9.1_Samples/1_Utilities/deviceQuery/deviceQuery      
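
By default SLURM writes the job's standard output, i.e. the nvidia-smi and deviceQuery listings above, to a file slurm-<jobid>.out in the directory where sbatch was invoked.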


Performance & Power Efficiency