We provide the following GROMACS installations:

  * ''gromacs + cuda'': GPU nodes, use the ''cuda-zen'' [[doku:spack-transistion|spack tree]] on VSC-5.
  * ''gromacs + mpi'': CPU-only nodes, use the ''zen''/''skylake'' [[doku:spack-transistion|spack trees]] on VSC-5/4.

Type ''spack find -l gromacs'' or ''module avail gromacs'' in the ''cuda-zen''/''zen''/''skylake'' [[doku:spack-transistion|spack trees]] on VSC-5/4. You can list the available variants with [[doku:spack]]: ''spack find -l gromacs +cuda'' or ''spack find -l gromacs +mpi''.
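
For example, to see which builds are installed (the versions and hashes shown will differ on each system):

<code>
# GPU builds in the cuda-zen spack tree on VSC-5
spack find -l gromacs +cuda

# CPU/MPI builds in the zen/skylake spack trees on VSC-5/4
spack find -l gromacs +mpi

# the same installations via environment modules
module avail gromacs
</code>

A specific build can then be loaded with ''spack load gromacs'' (or ''spack load /<hash>'' to pick one by hash), or with ''module load''.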
Because GROMACS runs with low efficiency across many nodes with many GPUs via MPI, we do not provide ''gromacs + cuda + mpi''. The ''gromacs + cuda'' packages therefore have no MPI support: there is no ''gmx_mpi'' binary, only ''gmx''.
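
As a rough sketch, a single-node GPU job can use the built-in thread-MPI of ''gmx mdrun'' instead of an MPI launcher. The SLURM options and module name below are placeholders, not the actual VSC-5 settings; check ''module avail gromacs'' and the VSC-5 queue documentation for real values:

<code bash>
#!/bin/bash
#SBATCH --job-name=gmx-gpu
#SBATCH --nodes=1            # gromacs + cuda has no MPI support, so stay on one node
#SBATCH --gres=gpu:1         # placeholder GPU request; adjust to the real partition
#SBATCH --time=01:00:00

module load gromacs          # placeholder; pick a cuda build from 'module avail gromacs'

# -ntmpi starts thread-MPI ranks inside one 'gmx' process (no mpirun, no gmx_mpi),
# -ntomp sets OpenMP threads per rank, -nb gpu offloads nonbonded work to the GPU
gmx mdrun -ntmpi 1 -ntomp 16 -nb gpu -s topol.tpr
</code>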