doku:gpaw [2014/08/27 11:27] (current) – jz
===== GPAW =====

=== Settings Scientific Linux 6.5 (VSC 1+2) ===
The following revisions of GPAW and ASE were used; specify the revisions in ''

  * gpaw (?): 11253 ; ase (?): 3547
  * gpaw 0.10.0: 11364 ; ase 3.8.1: 3440
  * gpaw-setups-0.9.9672
  * numpy-1.6.2

  * MPI version: impi-4.1.x
  * FFTW from Intel MKL
  * files: {{:

Notes:
  * The library ''
  * In customize_sl65_icc_mkl.py, ''
  * In config.py, ''
=== Settings Scientific Linux 6.4 (VSC 1+2) ===
The following versions of GPAW and ASE were used:
  * gpaw: svn checkout https://
  * ase: svn checkout https://
  * gpaw-setups-0.8.7929
  * numpy-1.6.2

  * MPI version: impi-4.1.0.024
  * FFTW from Intel MKL
  * files: {{:

  * With Scientific Linux 6.4 the header file '/

=== Settings VSC-1 CentOS 5.7 (old) ===
The following versions of GPAW and ASE were used:
  * gpaw: svn checkout https://
  * ase: svn checkout https://
  * gpaw-setups-0.8.7929
  * numpy-1.6.2

  * MPI version: mvapich2_intel_qlc-1.6 ; QLogic MPI did not work
  * FFTW from Intel MKL
  * files: {{:

=== Settings VSC-2 Scientific Linux 6.1 (old) ===
The following versions of GPAW and ASE were used:
  * gpaw: svn checkout https://
  * ase: svn checkout https://
  * gpaw-setups-0.8.7929
  * numpy-1.6.2

  * MPI version: intel_mpi_intel64-4.0.3.008
  * FFTW from Intel MKL
  * files: {{:
+ | |||
+ | === Settings gpaw qmmm version === | ||
+ | As other versions, but with special config file: | ||
+ | * {{: | ||
+ | * use other config files for specific cluster / OS | ||
+ | |||
=== Installation procedure ===
  * To install, download the files listed in one of the settings sections above to a directory of your choice.
  * After downloading, edit the file '
  * Execute install_gpaw_*.sh:
<code>
bash install_gpaw_*.sh all
</code>
  * Testing of GPAW:
<code>
gpaw-python <
</code>
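As a minimal sketch of the testing step, assuming the installation put ''gpaw-python'' on your ''PATH'' (the input file name is a placeholder, not from the original page):

```shell
# Hedged sketch of the testing step: run a small input file with the
# parallel gpaw-python interpreter built above.
# 'my_test.py' is a placeholder file name.
gpaw-python my_test.py
```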
+ | |||
+ | |||
+ | For certain problems an assert statement in | ||
+ | < | ||
+ | ~/ | ||
+ | </ | ||
+ | |||
+ | has to be commented out (approx line 239): | ||
+ | < | ||
+ | #assert abs(c_n.imag).max() < 1e-14 | ||
+ | </ | ||
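The edit above can also be applied non-interactively; a minimal sed sketch, assuming you substitute the real source file (its full path is truncated in the page above) for the ''FILE'' placeholder:

```shell
# Hedged sketch: comment out the assert with sed instead of editing by
# hand. FILE is a placeholder -- substitute the source file that
# contains the assert; its full path is truncated in the page above.
FILE=path/to/file_with_assert.py
# Prefix the assert line with '#', preserving its indentation.
sed -i 's/^\( *\)assert abs(c_n\.imag)\.max() < 1e-14/\1#assert abs(c_n.imag).max() < 1e-14/' "$FILE"
```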
+ | |||
+ | === running gpaw jobs === | ||
+ | == Job submission using all cores on the compute nodes (VSC-1 and VSC-2) == | ||
+ | < | ||
+ | #!/bin/sh | ||
+ | #$ -N Cl5_4x4x1 | ||
+ | #$ -pe mpich 256 | ||
+ | #$ -V | ||
+ | |||
+ | mpirun -machinefile $TMPDIR/ | ||
+ | </ | ||
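A script like the one above (the ''#$'' lines are Grid Engine directives) is submitted with ''qsub''; the script file name here is a placeholder:

```shell
# Submit the SGE job script shown above to the queueing system.
# 'gpaw_job.sh' is a placeholder name for your saved script.
qsub gpaw_job.sh
```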
+ | |||
+ | == Job submission using half of the cores on the compute nodes on VSC-2. == | ||
+ | If each of your processes require more than 2GB (and less than 4GB) of memory, you can use the parallel environment '' | ||
+ | < | ||
+ | #!/bin/sh | ||
+ | #$ -N Cl5_4x4x1 | ||
+ | #$ -pe mpich8 256 | ||
+ | #$ -V | ||
+ | |||
+ | mpirun -machinefile $TMPDIR/ | ||
+ | </ | ||
+ | If even more memory per process is required the environments '' | ||
+ | Alternatively, | ||
+ | |||
+ | Significant speed up is seen **in our test case** when --sl_default is set. First two parameters for the BLACS grid should be similar in size to get an optimal memory distribution on the nodes. |
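A hedged sketch of the ScaLAPACK setting discussed above; the values 4,4,64 (BLACS grid rows, grid columns, block size), the placement of the flag on the gpaw-python command line, and the input file name are illustrative assumptions, not from the original page:

```shell
# Hedged sketch: enable ScaLAPACK via --sl_default. The 4,4,64 values
# (BLACS grid rows, grid columns, block size) are illustrative; keep the
# first two similar in size, as noted above. 'input.py' is a placeholder.
gpaw-python input.py --sl_default=4,4,64
```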