Our recommendation:
  - Use the **most recent version** of GROMACS that we provide or build your own.
  - Use the newest hardware: use **1 GPU** on one of the GPU partitions.
  - Do some **performance analysis** to decide if a single GPU node (likely) or multiple CPU nodes via MPI (unlikely) better suit your problem.
In most cases it does not make sense to run on multiple GPU nodes with MPI, whether using one or two GPUs per node.
===== CPU or GPU Partition? =====
First you have to decide on which hardware GROMACS should run; we call this a ''partition''.
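If you are unsure what the cluster offers, SLURM itself can list the partitions and their GPUs. A minimal sketch (the point here is the output columns, not any specific partition name):

<code bash>
# List partitions with their generic resources (GRES) and node counts;
# rows whose GRES column shows "gpu:..." are GPU partitions.
sinfo -o "%P %G %D"

# Inspect the limits of one partition in detail (name is a placeholder):
scontrol show partition <partition_name>
</code>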
===== Installations =====
Type ''spack find gromacs'' to list the available GROMACS installations.
Because of the low efficiency of GROMACS on many nodes with many GPUs via MPI, we do not provide a variant with both ''+mpi'' and ''+cuda''.

We provide the following GROMACS variants:

==== GPU but no MPI ====

We recommend the GPU nodes; use the ''cuda-zen'' variant:

**cuda-zen**:
  * Gromacs +cuda ~mpi, compiled with **GCC**

Since the ''cuda-zen'' variant comes without MPI, the binary is called ''gmx''.
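For illustration, running on a GPU node could look like the following sketch; the ''spack load'' spec is an assumption, so pick a concrete match from ''spack find gromacs''. The ''mdrun'' offload flags are standard GROMACS options:

<code bash>
# Load the GPU variant (spec is an example, adjust to `spack find gromacs`)
spack load gromacs +cuda

# No MPI in this variant: start `gmx` directly, one rank with OpenMP threads.
# Nonbonded, PME, bonded forces and the integration update go to the one GPU.
gmx mdrun -s topol.tpr -ntomp 8 -nb gpu -pme gpu -bonded gpu -update gpu
</code>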
+ | |||
+ | ==== MPI but no GPU ==== | ||
+ | |||
+ | For Gromacs on CPU only but with MPI, use '' | ||
+ | |||
+ | **zen**: | ||
+ | * Gromacs +openmpi +blas +lapack ~cuda, all compiled with **GCC** | ||
+ | * Gromacs +openmpi +blas +lapack ~cuda, all compiled with **AOCC** | ||
+ | * | ||
+ | **skylake**: | ||
+ | * Gromacs +**open**mpi +blas +lapack ~cuda, all compiled with **GCC** | ||
+ | * Gromacs +**open**mpi +blas +lapack ~cuda, all compiled with **Intel** | ||
+ | * Gromacs +**intel**mpi +blas +lapack ~cuda, all compiled with **GCC** | ||
+ | * Gromacs +**intel**mpi +blas +lapack ~cuda, all compiled with **Intel** | ||
+ | |||
+ | In some of these packages, there is no '' | ||
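A CPU-only run with MPI could then look like this sketch; the spec and the rank/thread counts are example values to adapt, and note that the binary here is ''gmx_mpi'':

<code bash>
# Load a CPU-only MPI variant (spec is an example)
spack load gromacs +openmpi ~cuda

# Hybrid MPI/OpenMP run: srun starts the MPI ranks under SLURM,
# each rank runs its own OpenMP threads.
srun --ntasks=32 gmx_mpi mdrun -s topol.tpr -ntomp 4
</code>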
===== Batch Script =====
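As a starting point, a job script for the recommended single-GPU case might look like the sketch below; the partition name is a placeholder and the resource numbers are examples, so replace them with values valid for your project:

<code bash>
#!/bin/bash
#SBATCH --job-name=gromacs
#SBATCH --partition=<gpu_partition>   # placeholder: pick a GPU partition
#SBATCH --gres=gpu:1                  # 1 GPU, as recommended above
#SBATCH --ntasks=1                    # no MPI: a single rank
#SBATCH --cpus-per-task=8             # OpenMP threads for gmx
#SBATCH --time=24:00:00

# Load the GPU variant (see Installations above)
spack load gromacs +cuda

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
gmx mdrun -s topol.tpr -nb gpu -pme gpu -update gpu
</code>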
===== Benchmarks =====

We benchmark various scenarios:
  - a VSC user's test case (50,… atoms)
  - R-143a in hexane (20,248 atoms) with a very high output rate
  - a short RNA piece with explicit water (31,889 atoms)