GPAW
Settings Scientific Linux 6.5 (VSC-1 and VSC-2)
The following revisions of GPAW and ASE were used; specify the revisions in install_gpaw_vsc_sl65.sh:
- gpaw (?): 11253 ; ase (?): 3547
- gpaw 0.10.0: 11364 ; ase 3.8.1: 3440
- gpaw-setups-0.9.9672
- numpy-1.6.2
- MPI Version: impi-4.1.x
- FFTW from INTEL MKL
Notes:
- The library libxc had to be installed.
- In customize_sl65_icc_mkl.py, extra_link_args += ['-lxc'] had to be added.
- In config.py, mpicompiler = 'mpiicc' needed to be set.
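The two file edits in the notes above can also be scripted. The sketch below operates on scratch stand-in files so it is safe to run anywhere; in a real build you would apply the same lines to customize_sl65_icc_mkl.py and config.py inside your GPAW checkout:

```shell
#!/bin/sh
# Demo of the customize/config edits above, using scratch stand-in files.
# In a real build, edit the files in the GPAW source tree instead.
printf "mpicompiler = None\n" > config.py   # stand-in for GPAW's config.py
: > customize_sl65_icc_mkl.py               # stand-in customize file

# Link against libxc:
echo "extra_link_args += ['-lxc']" >> customize_sl65_icc_mkl.py
# Use the Intel MPI compiler wrapper:
sed -i "s/^mpicompiler *=.*/mpicompiler = 'mpiicc'/" config.py

grep mpicompiler config.py
```

The sed/echo lines only reproduce the edits stated in the notes; everything else here is scaffolding for the demo.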
Settings Scientific Linux 6.4 (VSC-1 and VSC-2)
The following versions of GPAW and ASE were used:
- gpaw: svn checkout https://svn.fysik.dtu.dk/projects/gpaw/trunk gpaw -r 10428
- ase: svn checkout https://svn.fysik.dtu.dk/projects/ase/trunk ase -r 2801
- gpaw-setups-0.8.7929
- numpy-1.6.2
- MPI Version: impi-4.1.0.024
- FFTW from INTEL MKL
- With Scientific Linux 6.4, the header file '/usr/include/python2.6/modsupport.h' causes problems; to compile GPAW, some RedHat-specific lines at the bottom of the file had to be commented out.
Settings VSC-1 CentOS 5.7 (old)
The following versions of GPAW and ASE were used:
- gpaw: svn checkout https://svn.fysik.dtu.dk/projects/gpaw/trunk gpaw -r 9616
- ase: svn checkout https://svn.fysik.dtu.dk/projects/ase/trunk ase -r 2801
- gpaw-setups-0.8.7929
- numpy-1.6.2
- MPI Version: mvapich2_intel_qlc-1.6 (the QLogic MPI did not work)
- FFTW from INTEL MKL
Settings VSC-2 Scientific Linux 6.1 (old)
The following versions of GPAW and ASE were used:
- gpaw: svn checkout https://svn.fysik.dtu.dk/projects/gpaw/trunk gpaw -r 9616
- ase: svn checkout https://svn.fysik.dtu.dk/projects/ase/trunk ase -r 2801
- gpaw-setups-0.8.7929
- numpy-1.6.2
- MPI Version: intel_mpi_intel64-4.0.3.008
- FFTW from INTEL MKL
Settings GPAW QM/MM version
As for the other versions, but with a special config file:
- use the config file matching your specific cluster / OS
Installation procedure
- To install, download the files listed in one of the settings sections above to a directory of your choice.
- After downloading, edit the file 'install_gpaw_*.sh'.
- Execute install_gpaw_*.sh:
bash install_gpaw_*.sh all
- Testing of GPAW:
gpaw-python <DIR_TO_GPAW_INST_BIN>/gpaw-test
For certain problems, an assert statement in
~/gpaw_inst_latest/lib64/python2.6/site-packages/gpaw/wavefunctions/lcao.py
has to be commented out (around line 239):
#assert abs(c_n.imag).max() < 1e-14
Running GPAW jobs
Job submission on VSC-1 (on VSC-2, mpich8 or mpich4 can also be used instead):
#!/bin/sh
#$ -N Cl5_4x4x1
#$ -pe mpich 256
#$ -V

export OMP_NUM_THREADS=1

NSLOTS_PER_NODE_AVAILABLE=8
NSLOTS_PER_NODE_USED=4
NSLOTS_REDUCED=`echo "$NSLOTS / $NSLOTS_PER_NODE_AVAILABLE * $NSLOTS_PER_NODE_USED" | bc`
echo "starting run with $NSLOTS_REDUCED processes; $NSLOTS_PER_NODE_USED per node"

for i in `seq 1 $NSLOTS_PER_NODE_USED`
do
  uniq $TMPDIR/machines >> $TMPDIR/tmp
done
sort $TMPDIR/tmp > $TMPDIR/myhosts
cat $TMPDIR/myhosts

mpirun -machinefile $TMPDIR/myhosts -np $NSLOTS_REDUCED gpaw-python static.py --domain=None --band=1 --sl_default=4,4,64
A significant speed-up is seen when --sl_default is set. The first two parameters of the BLACS grid should be similar in size to obtain an optimal memory distribution across the nodes.