
Settings Scientific Linux 6.5 (VSC 1+2)

The following revisions of GPAW and ASE were used; specify the revisions in install_gpaw_vsc_sl65.sh:

  • gpaw (?): 11253 ; ase (?): 3547
  • gpaw 0.10.0: 11364 ; ase 3.8.1: 3440
  • gpaw-setups-0.9.9672
  • numpy-1.6.2

Notes:

  • The library libxc had to be installed.
  • In customize_sl65_icc_mkl.py, extra_link_args += ['-lxc'] had to be added.
  • In config.py, mpicompiler = 'mpiicc' had to be set (see the sketch after this list).
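
A minimal sketch of these two edits (the libxc paths and the surrounding variable names are assumptions; check the actual contents of your customize_sl65_icc_mkl.py and config.py):

# customize_sl65_icc_mkl.py -- link GPAW against the separately installed libxc
# (replace /path/to/libxc with the prefix of your libxc installation)
library_dirs += ['/path/to/libxc/lib']
include_dirs += ['/path/to/libxc/include']
extra_link_args += ['-lxc']

# config.py -- build the parallel interpreter with the Intel MPI compiler wrapper
mpicompiler = 'mpiicc'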

Settings Scientific Linux 6.4 (VSC 1+2)

The following versions of GPAW and ASE were used:

  • With Scientific Linux 6.4, the header file '/usr/include/python2.6/modsupport.h' causes some problems; to compile GPAW, some RedHat-specific lines at the bottom of the file had to be commented out.

Settings VSC-1 CentOS 5.7 (old)

The following versions of GPAW and ASE were used:

Settings VSC-2 Scientific Linux 6.1 (old)

The following versions of GPAW and ASE were used:

Settings GPAW QM/MM version

Same as the other versions, but with a special config file:

Installation procedure

  • For installation, download the files listed in one of the settings sections above to a directory of your choice.
  • After downloading, edit the file 'install_gpaw_*.sh'.
  • Execute install_gpaw_*.sh:
   bash install_gpaw_*.sh all
  • Testing of GPAW (see also the check below):
gpaw-python <DIR_TO_GPAW_INST_BIN>/gpaw-test
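
Before running the full test suite, a quick import check can confirm that the intended ASE and GPAW installations are picked up (a minimal sketch; the version attribute names differ between releases, older ones expose ase.version.version and gpaw.version.version instead):

# check_install.py -- hypothetical sanity check, run with gpaw-python or python
import ase
import gpaw

# getattr() keeps this working on releases that lack the __version__ attribute
print('ASE :', getattr(ase, '__version__', 'unknown'))
print('GPAW:', getattr(gpaw, '__version__', 'unknown'))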

For certain problems, an assert statement in

~/gpaw_inst_latest/lib64/python2.6/site-packages/gpaw/wavefunctions/lcao.py

has to be commented out (at approximately line 239):

#assert abs(c_n.imag).max() < 1e-14

Running GPAW jobs

Job submission using all cores on the compute nodes (VSC-1 and VSC-2)
#!/bin/sh
#$ -N Cl5_4x4x1
#$ -pe mpich 256
#$ -V

mpirun -machinefile $TMPDIR/machines -np $NSLOTS gpaw-python static.py --domain=None --band=1 --sl_default=4,4,64

Job submission using half of the cores on the compute nodes on VSC-2

If each of your processes requires more than 2 GB (and less than 4 GB) of memory, you can use the parallel environment mpich8. This allocates only 8 processes per node while still starting 256 processes in total, distributed over 32 nodes. In that case it is necessary to use the variable $NSLOTS_REDUCED instead of $NSLOTS.

#!/bin/sh
#$ -N Cl5_4x4x1
#$ -pe mpich8 256
#$ -V

mpirun -machinefile $TMPDIR/machines -np $NSLOTS_REDUCED gpaw-python static.py --domain=None --band=1 --sl_default=4,4,64

If even more memory per process is required, the environments mpich4, mpich2, and mpich1 are also available, as discussed in Hybrid OpenMP/MPI jobs. Alternatively, you can simply start your GPAW job with more processes, which reduces the amount of memory per process. GPAW usually scales well with the number of processes.

A significant speed-up is seen in our test case when --sl_default is set. The first two parameters define the BLACS process grid and should be similar in size to obtain an optimal memory distribution across the nodes.
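
The same parallelization settings can usually also be given inside the GPAW input script via the parallel keyword instead of on the gpaw-python command line (a sketch; the calculator parameters besides parallel are placeholders, and support for the keyword should be checked against your GPAW version):

from gpaw import GPAW

# Equivalent to --domain=None --band=1 --sl_default=4,4,64:
# a 4x4 BLACS process grid with block size 64 for the ScaLAPACK operations.
calc = GPAW(mode='lcao',                      # placeholder, adjust to your system
            parallel={'domain': None,
                      'band': 1,
                      'sl_default': (4, 4, 64)})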
