===== Installations =====
We provide the following GROMACS installations:
  
  * ''gromacs + cuda'': GPU nodes; use the ''cuda-zen'' [[doku:spack-transition|spack tree]] on VSC-5.
  * ''gromacs + mpi'': CPU only; use the ''zen''/''skylake'' [[doku:spack-transition|spack trees]] on VSC-5/4.
  
Type ''spack find -l gromacs'' or ''module avail gromacs'' in the ''cuda-zen''/''zen''/''skylake'' [[doku:spack-transition|spack trees]] on VSC-5/4 to see the installed versions. You can list the available variants with [[doku:spack]]: ''spack find -l gromacs +cuda'' or ''spack find -l gromacs +mpi''.
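A login-node session might look like the following sketch (the package hash shown to ''spack load'' is a placeholder; use one of the hashes that your own ''spack find -l'' reports):

<code bash>
# List installed GROMACS packages and their hashes in the active spack tree
spack find -l gromacs

# Narrow the list down to the CUDA or the MPI variant
spack find -l gromacs +cuda
spack find -l gromacs +mpi

# Load a specific installation into the current shell by hash
# (/abcdef1 is a placeholder for a real hash from the listing above)
spack load /abcdef1

# Alternatively, use the module system
module avail gromacs
module load gromacs
</code>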
  
Because GROMACS scales poorly across many nodes with many GPUs via MPI, we do not provide ''gromacs + cuda + mpi''. The ''gromacs + cuda'' packages therefore have no MPI support: there is no ''gmx_mpi'' binary, only ''gmx''.
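The non-MPI ''gmx'' binary can still use all cores and the GPU of a single node through its built-in thread-MPI. A single-node GPU run might look like this sketch (the input file name and the thread counts are assumptions; tune them to your node and job):

<code bash>
# Single-node GPU run with the non-MPI gmx binary.
# -ntmpi sets thread-MPI ranks, -ntomp sets OpenMP threads per rank;
# -nb gpu / -pme gpu offload the nonbonded and PME work to the GPU.
# Replace topol.tpr with your own run input file.
gmx mdrun -s topol.tpr -ntmpi 1 -ntomp 16 -nb gpu -pme gpu
</code>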
  • doku/gromacs.txt
  • Last modified: 2023/11/23 12:27
  • by msiegel