Special hardware (GPUs, binfs) available & how to use it
- Article written by Siegfried Höfinger (VSC Team) <html><br></html>(last update 2020-10-04 by sh).
TOP500 List June 2020
<HTML> <!--slide 1--> <!--for nations flags see https://www.free-country-flags.com--> </HTML>
<HTML> <!--slide 2--> </HTML>
Components on VSC-3+
<HTML> <!--slide 3--> </HTML>
Working on GPU nodes
Interactive mode
1. VSC-3 > salloc -N 1 -p gpu_gtx1080single --qos gpu_gtx1080single
2. VSC-3 > squeue -u $USER
3. VSC-3 > srun -n 1 hostname   (...while still on the login node !)
4. VSC-3 > ssh n372-012   (...or whatever else node had been assigned)
5. VSC-3 > module load cuda/9.1.85
           cd ~/examples/09_special_hardware/gpu_gtx1080/matrixMul
           nvcc ./matrixMul.cu
           ./a.out
           cd ~/examples/09_special_hardware/gpu_gtx1080/matrixMulCUBLAS
           nvcc matrixMulCUBLAS.cu -lcublas
           ./a.out
6. VSC-3 > nvidia-smi
7. VSC-3 > /opt/sw/x86_64/glibc-2.17/ivybridge-ep/cuda/9.1.85/NVIDIA_CUDA-9.1_Samples/1_Utilities/deviceQuery/deviceQuery
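Steps 1–5 above can also be bundled into a single `srun` invocation instead of `salloc` plus `ssh`. The following is a minimal sketch; the partition, QOS, module version, and example paths are taken from this page, and the wrapper function itself is hypothetical (adjust to your own setup):

```shell
#!/bin/bash
# Sketch: build one srun command that compiles and runs the matrixMul
# sample on an allocated GPU node (partition/QOS names as on this page).
build_srun_cmd() {
    # example directory assumed from the interactive steps above
    local workdir="$HOME/examples/09_special_hardware/gpu_gtx1080/matrixMul"
    echo "srun -N 1 -p gpu_gtx1080single --qos gpu_gtx1080single" \
         "bash -c 'module load cuda/9.1.85 && cd $workdir && nvcc ./matrixMul.cu && ./a.out'"
}

# On a login node you would run:  eval "$(build_srun_cmd)"
build_srun_cmd
```

This only prints the composed command; executing it requires a login node with SLURM available.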
<HTML> <!--slide 4--> </HTML>
Working on GPU nodes cont.
SLURM submission gpu_test.scrpt
#!/bin/bash
#
# usage: sbatch ./gpu_test.scrpt
#
#SBATCH -J gtx1080
#SBATCH -N 1
#SBATCH --partition gpu_gtx1080single
#SBATCH --qos gpu_gtx1080single

module purge
module load cuda/9.1.85

nvidia-smi
/opt/sw/x86_64/glibc-2.17/ivybridge-ep/cuda/9.1.85/NVIDIA_CUDA-9.1_Samples/1_Utilities/deviceQuery/deviceQuery
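For completeness, the whole workflow (write the batch script, then submit it) can be scripted in one go. A minimal sketch, assuming the partition/QOS names from this page; the submission step is guarded so the generation part also works on machines without SLURM:

```shell
#!/bin/bash
# Sketch: generate gpu_test.scrpt with a heredoc, then submit it via sbatch.
# Partition/QOS names are the ones used on this page.
cat > gpu_test.scrpt <<'EOF'
#!/bin/bash
#SBATCH -J gtx1080
#SBATCH -N 1
#SBATCH --partition gpu_gtx1080single
#SBATCH --qos gpu_gtx1080single
module purge
module load cuda/9.1.85
nvidia-smi
EOF

# submit only where SLURM is actually installed
if command -v sbatch >/dev/null 2>&1; then
    sbatch ./gpu_test.scrpt
else
    echo "sbatch not found; generated gpu_test.scrpt only"
fi
```

The quoted heredoc delimiter (`'EOF'`) prevents shell expansion, so the `#SBATCH` lines are written verbatim.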
<html><font color="navy"></html>Exercise/Example/Problem:<html></font></html> <html><br/></html> Using interactive mode or batch submission, figure out whether ECC is enabled on the GPUs of type gtx1080.
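As a starting hint for the exercise: `nvidia-smi` exposes the ECC mode through its query interface. A sketch (to be run on the GPU node itself; the query fields come from nvidia-smi's standard `--query-gpu` options):

```shell
#!/bin/bash
# Hint sketch: ask nvidia-smi for the current ECC mode of each GPU.
# Guarded so the script degrades gracefully off the GPU nodes.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,ecc.mode.current --format=csv
else
    echo "nvidia-smi not available on this node"
fi
```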
<HTML> <!--slide 5--> </HTML>