  
  - Use the most recent version of GROMACS that we provide or build your own.
  - Use the newest hardware: the partitions ''zen2_0256_a40x2'' or ''zen3_0512_a100x2'' on VSC5 have plenty of nodes available.
  - Read our article on the [[doku:gromacs_multi_gpu|multi GPU]] setup and do some performance analysis.
  - Run on multiple nodes with MPI, each with 1 GPU.
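A multi-node run as in the last point could look roughly like the following. This is a sketch, not a tested script: the partition, QOS and module names are taken from the single-node example below, and the node count of 2 is arbitrary.

```shell
#!/bin/bash
# Sketch of a 2-node run, 1 GPU per node -- adjust names and counts to your case
#SBATCH --job-name=myname
#SBATCH --partition=zen2_0256_a40x2
#SBATCH --qos=zen2_0256_a40x2
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:1

unset OMP_NUM_THREADS

module purge
module load cuda/11.5.0-gcc-11.2.0-ao7cp7w openmpi/4.1.4-gcc-11.2.0-ub765vm gromacs/2022.2-gcc-11.2.0-4x2vwol

# one MPI rank per node; each rank uses its node's GPU
srun gmx_mpi mdrun -s topol.tpr
```
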
available partitions. The partition has to be set in the batch script,
see the example below. Be aware that each partition has different
hardware, so choose the parameters accordingly. GROMACS decides mostly
on its own how it wants to work, so don't be surprised if it ignores
settings like environment variables.
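To check which partitions exist and what GPUs they provide, SLURM can be queried directly. These are standard SLURM commands; the partition name is only an example:

```shell
# list partitions with their generic resources (GPUs) and node counts
sinfo -o "%P %G %D"

# show the limits of one partition in detail
scontrol show partition zen2_0256_a40x2
```
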
#!/bin/bash
#SBATCH --job-name=myname
#SBATCH --partition=zen2_0256_a40x2
#SBATCH --qos=zen2_0256_a40x2
#SBATCH --gres=gpu:1

unset OMP_NUM_THREADS
  
module purge
module load cuda/11.5.0-gcc-11.2.0-ao7cp7w openmpi/4.1.4-gcc-11.2.0-ub765vm python/3.8.12-gcc-11.2.0-rvq5hov gromacs/2022.2-gcc-11.2.0-4x2vwol

gmx_mpi mdrun -s topol.tpr
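Assuming the script above is saved as ''gromacs.sh'' (a file name chosen here only for illustration), it is submitted and monitored with the usual SLURM commands:

```shell
sbatch gromacs.sh   # submit; prints the job id
squeue -u $USER     # watch your queued and running jobs
scancel <jobid>     # cancel a job, if needed
```
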
  • doku/gromacs.txt
  • Last modified: 2023/11/23 12:27
  • by msiegel