GPU Nodes on VSC-1

Currently, two different kinds of GPU nodes are available on VSC-1.

Access to the nodes is managed via dedicated queues named 'fermi' and 'kepler'. If you would like to have access to these queues, please contact the system administration and specify the user who should get access to the GPU nodes; the username must belong to an existing VSC user. After the user has been added, the nodes can be accessed interactively via:

#switch user group first, then login
sg fermi             
qrsh -q fermi 

or

#switch user group first, then login
sg fermi             
qrsh -q kepler

Alternatively you can submit a job script which has in its header the parameters:

#$ -q fermi
#$ -P fermi

or

#$ -q kepler
#$ -P fermi

Using the additional parameter '-pe smp 6' is currently not mandatory, but strongly encouraged: since each node has two GPU units and 12 CPU cores, each job should request half of the cores so that no more than two GPU jobs run on one node.
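
A complete job script combining these settings could look like the following minimal sketch ('gpu_job' and './my_gpu_program' are placeholder names):

#!/bin/bash
#$ -N gpu_job
#$ -q fermi
#$ -P fermi
#$ -pe smp 6

#run the actual GPU program
./my_gpu_program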

Job submission has to be done using the 'qsub.py' wrapper script:

qsub.py job.sh
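
The status of a submitted job can then be checked with the usual Sun Grid Engine client commands, e.g. (assuming the standard SGE tools are in your PATH):

#list your own pending and running jobs
qstat -u $USER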

The runtime consumed on the GPU nodes will not be deducted from your VSC account.

GPU CUDA Documentation

Available software tools are the nvcc compiler, the CUDA libraries, and the CULA tools. The following environment variables are set automatically in your environment so that these tools can be used:

CULA_ROOT="/opt/sw/cula"
CULA_INC_PATH="$CULA_ROOT/include"
CULA_BIN_PATH_64="$CULA_ROOT/bin64"
CULA_LIB_PATH_64="$CULA_ROOT/lib64"
PATH=/opt/sw/cuda/bin:$PATH
LD_LIBRARY_PATH=/opt/sw/cuda/lib64:/opt/sw/cuda/computeprof/bin:$CULA_LIB_PATH_64:$LD_LIBRARY_PATH

Example programs can be found in '/opt/sw/cula/examples/'.
Extensive documentation is available in '/opt/sw/cuda/doc/'.
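
A CUDA source file can then be compiled with nvcc directly; programs using CULA are additionally linked against the CULA libraries. The following is a minimal sketch ('example.cu' and 'cula_example.cu' are placeholder file names, and the exact CULA library names may differ between CULA versions):

#compile a plain CUDA program
nvcc -o example example.cu

#compile a program using CULA and link against the CULA libraries
nvcc -I$CULA_INC_PATH -L$CULA_LIB_PATH_64 -lcula -o cula_example cula_example.cu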

NVIDIA System Management Interface program - nvidia-smi

The tool can be used to monitor GPU usage, e.g.:

[@r18n45 ~]# nvidia-smi -i 1 -q -d UTILIZATION,MEMORY -l

==============NVSMI LOG==============

Timestamp                       : Tue Jun 28 14:09:58 2011
Driver Version                  : 270.40

Attached GPUs                   : 2

GPU 0:84:0
    Memory Usage
        Total                   : 2687 Mb
        Used                    : 101 Mb
        Free                    : 2586 Mb
    Utilization
        Gpu                     : 99 %
        Memory                  : 0 %

For more options see 'man nvidia-smi'.

Code examples and exercises

Examples of C and Fortran code can be found here:

/opt/sw/gpu-doc/exercises

The subdirectories 'C' and 'fortran' contain exercises with templates; possible solutions are provided in the respective 'solutions' subdirectories.
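
To work on the exercises, it is advisable to copy them to a writable location first, for example (a sketch; the nvcc invocation is illustrative and individual exercises may ship with their own build instructions):

#copy the exercises into your home directory
cp -r /opt/sw/gpu-doc/exercises $HOME/gpu-exercises
cd $HOME/gpu-exercises/C

#build one of the templates (file name is a placeholder)
nvcc -o exercise exercise.cu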

CUDA C References

CUDA C best practices
CUDA C programming guide
CUDA GDB user manual
CUDA toolkit reference

CUDA Libraries References

CUBLAS library
CUFFT library
CURAND library
CUSPARSE library

CUDA Fermi References

Fermi compatibility guide
Fermi tuning guide
NVIDIA Fermi compute architecture - whitepaper

CUDA Fortran Reference

PGI CUDA Fortran user guide

CUDA PTX Reference

PTX documentation

Release Notes

compute_visual_profiler_release_notes_linux.txt
cuda_profiler_3.0.txt