===== Installations =====

We provide the following GROMACS installations:
  
  * ''gromacs + cuda'': for GPU nodes; use the ''cuda-zen'' [[doku:spack-transition|spack tree]] on VSC-5.
  * ''gromacs + mpi'': for CPU-only runs; use the ''zen''/''skylake'' [[doku:spack-transition|spack trees]] on VSC-5/4.
  
To find the available versions, type ''spack find -l gromacs'' or ''module avail gromacs'' in the ''cuda-zen''/''zen''/''skylake'' [[doku:spack-transition|spack trees]] on VSC-5/4. You can list the available variants with [[doku:spack]]: ''spack find -l gromacs +cuda'' or ''spack find -l gromacs +mpi''.
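For example, to pick and load one specific installation (a minimal sketch; ''abc1234'' is a placeholder hash, copy a real one from the output of ''spack find -l'' on your system):

<code bash>
# list installed GROMACS packages, one hash per installation
spack find -l gromacs +cuda

# load a specific installation by its hash
# (abc1234 is a placeholder; use a hash from the output above)
spack load /abc1234

# alternatively, browse and load via environment modules
module avail gromacs
</code>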
  
Because GROMACS scales poorly across many nodes with many GPUs via MPI, we do not provide ''gromacs + cuda + mpi''. Since the ''gromacs + cuda'' packages are built without MPI support, there is no ''gmx_mpi'' binary, only ''gmx''.
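The launch therefore differs between the two builds (a minimal sketch; ''topol'' is a placeholder for your own run input files):

<code bash>
# GPU build (gromacs + cuda): single node, thread-MPI and OpenMP only
gmx mdrun -deffnm topol

# CPU build (gromacs + mpi): start the MPI binary through the launcher,
# e.g. under SLURM
srun gmx_mpi mdrun -deffnm topol
</code>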
  
  