doku:vasp-benchmarks (last revised 2022/06/23 by msiegel)
A 5x4 supercell with 150 Palladium atoms and 24 Oxygen atoms, i.e., 3 pure Palladium layers with one mixed Palladium/Oxygen layer.
+ | |||
+ | ===== VSC 3 ===== | ||
+ | |||
+ | The code was compiled with the intel compiler, intel MKL (BLACS, SCALAPACK), and intel mpi (5.0.0.028). | ||
+ | |||
+ | Figure 6 shows that the running time of this benchmark substantially decreases with the number of MPI-processes. The decrease in running time with the number of threads is less dominant. However, other applications may exhibit a different behavior. | ||
+ | |||
+ | {{: | ||
+ | {{: | ||
+ | |||
+ | **Figure 6:** real running time on eight and 32 nodes depending on the number of MPI processes and the number of threads. | ||
+ | {{ : | ||
+ | **Figure 7:** real running time for 16 tasks (MPI processes) per node and one thread depending on the number of nodes. | ||
+ | |||

===== VSC 2 =====

The code was compiled with the Intel compiler, Intel MKL (BLACS, ScaLAPACK), and Intel MPI (4.0.1.007).

The figure shows how the computing time depends on the selected mpich environment and the number of processes. **mpich1** means 1 process per node, **mpich2** 2 processes per node, and so forth. For 16 processes per node the environment variable does not follow this naming convention; it is simply called **mpich**.

A clear trend is that the computing time decreases with a decreasing number of processes per node (mpich) and an increasing total number of processes, which also corresponds to an increasing number of slots. When the number of processes grows further, however, the computing time rises again; the reason is the suboptimal scaling of BLACS and ScaLAPACK. An improvement can be achieved by using ELPA instead.

{{ :

**Figure 5:** real running time depending on the selected mpich environment and the number of processes. Here the number of threads is always 1.

===== VSC 1 =====
A lower mpich (number of processes per node) further reduces the running time. Considering also the number of threads,
Figures 3 and 4 relate
Due to overhead such as communication, data transfer, etc., real scaling will never follow this behavior.
Therefore the core hours increase with the number of slots.
It can be concluded that the real time can be reduced by increasing the system size, however at the price of disproportionately increasing core hours. An increase in system size should therefore stay within the region where it yields a substantial gain in real time; a reduction of only a few minutes at the cost of several core hours may be inefficient. A further benefit of not requesting too many slots is a shorter waiting time in the queue.
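This trade-off can be made concrete with a small calculation (core hours = slots × real time in hours). The numbers below are invented for illustration, not taken from the benchmark:

```shell
#!/bin/sh
# Hypothetical example: 32 slots finish a job in 60 min,
# 128 slots finish the same job in 45 min.
slots_a=32;  minutes_a=60
slots_b=128; minutes_b=45

# core minutes divided by 60 gives core hours
ch_a=$(( slots_a * minutes_a / 60 ))
ch_b=$(( slots_b * minutes_b / 60 ))

echo "config A: ${ch_a} core hours"   # 32 core hours
echo "config B: ${ch_b} core hours"   # 96 core hours
# Quadrupling the slots saves 15 min of real time
# but triples the core hours spent.
```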
+ | |||
+ | //Mind that the specific behavior of a code can be very different from this example. | ||
+ | Even the behavior of the same code may strongly depend on the input. | ||
+ | **Thus, it is recommended to test your specific application in order to find the optimum combination of 'mpich environment-number of processes-number of threads' | ||
{{ :
{{ :
**Figure 4:** real running time (dashed lines [min]) and core hours (solid lines [h]) for the best result (mpich/1 thread).