===== GPAW =====

=== Settings Scientific Linux 6.5 (VSC 1+2) ===
The following revisions of GPAW and ASE were used; specify them in ''install_gpaw_vsc_sl65.sh'' (checkout commands are sketched after the list):

  * gpaw (?): 11253 ; ase (?): 3547
  * gpaw 0.10.0: 11364 ; ase 3.8.1: 3440
  * gpaw-setups-0.9.9672
  * numpy-1.6.2

  * MPI version: impi-4.1.x
  * FFTW from Intel MKL
  * files: {{:doku:gpaw:sl65:site.cfg.numpy.sl65_icc_mkl.txt}}, {{:doku:gpaw:sl65:install_gpaw_vsc_sl65.sh}}, {{:doku:gpaw:sl65:customize_sl65_icc_mkl.py}}, {{:doku:gpaw:sl65:config.py}}
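
The checkouts follow the same pattern as in the sections below; for example, for gpaw 0.10.0 with ase 3.8.1 (revision numbers taken from the list above):
<code>
svn checkout https://svn.fysik.dtu.dk/projects/gpaw/trunk gpaw -r 11364
svn checkout https://svn.fysik.dtu.dk/projects/ase/trunk ase -r 3440
</code>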

Notes:
  * The library ''libxc'' had to be installed.
  * In ''customize_sl65_icc_mkl.py'', ''extra_link_args += ['-lxc']'' had to be added.
  * In ''config.py'', ''mpicompiler = 'mpiicc' '' needed to be set.
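
A minimal shell sketch of these two edits, assuming both files sit in the current directory and that appending to the end of the customize file is acceptable:
<code>
# Append the libxc link flag to the customize file
echo "extra_link_args += ['-lxc']" >> customize_sl65_icc_mkl.py

# Point config.py at the Intel MPI compiler wrapper
# (assumes an existing 'mpicompiler = ...' line to replace)
sed -i "s/^mpicompiler = .*/mpicompiler = 'mpiicc'/" config.py
</code>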

=== Settings Scientific Linux 6.4 (VSC 1+2) ===
The following versions of GPAW and ASE were used:
  * gpaw: svn checkout https://svn.fysik.dtu.dk/projects/gpaw/trunk gpaw -r 10428
  * ase: svn checkout https://svn.fysik.dtu.dk/projects/ase/trunk ase -r 2801
  * gpaw-setups-0.8.7929
  * numpy-1.6.2

  * MPI version: impi-4.1.0.024
  * FFTW from Intel MKL
  * files: {{:doku:gpaw:site.cfg.numpy.sl64_icc_mkl.txt}}, {{:doku:gpaw:install_gpaw_vsc_sl64.sh}}, {{:doku:gpaw:customize_sl64_icc_mkl.py}}, {{:doku:gpaw:config.py}}

  * With Scientific Linux 6.4 the header file ''/usr/include/python2.6/modsupport.h'' causes problems; to compile GPAW, some RedHat-specific lines at the bottom of that file had to be commented out.

=== Settings VSC-1 CentOS 5 (old) ===
The following versions of GPAW and ASE were used:
  * gpaw: svn checkout https://svn.fysik.dtu.dk/projects/gpaw/trunk gpaw -r 9616
  * ase: svn checkout https://svn.fysik.dtu.dk/projects/ase/trunk ase -r 2801
  * gpaw-setups-0.8.7929
  * numpy-1.6.2

  * MPI version: mvapich2_intel_qlc-1.6 ; QLogic MPI did not work
  * FFTW from Intel MKL
  * files: {{:doku:gpaw:site.cfg.numpy.vsc1.txt}}, {{:doku:gpaw:install_gpaw_vsc_x.sh}}, {{:doku:gpaw:customize_vsc1_icc.py}}, {{:doku:gpaw:config.py}}

=== Settings VSC-2 Scientific Linux 6.1 (old) ===
The following versions of GPAW and ASE were used:
  * gpaw: svn checkout https://svn.fysik.dtu.dk/projects/gpaw/trunk gpaw -r 9616
  * ase: svn checkout https://svn.fysik.dtu.dk/projects/ase/trunk ase -r 2801
  * gpaw-setups-0.8.7929
  * numpy-1.6.2

  * MPI version: intel_mpi_intel64-4.0.3.008
  * FFTW from Intel MKL
  * files: {{:doku:gpaw:site.cfg.numpy.vsc2.txt}}, {{:doku:gpaw:install_gpaw_vsc_x.sh}}, {{:doku:gpaw:customize_vsc2_icc.py}}, {{:doku:gpaw:config.py}}

=== Settings GPAW QM/MM version ===
As for the other versions, but with a special config file (see the sketch below):
  * {{:doku:gpaw:config.qmmm.py}}
  * use the other config files for your specific cluster / OS
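
A sketch of how this could be wired into the installation, assuming the install script picks up ''config.py'' from the working directory (an assumption; adapt to your setup):
<code>
# Use the QM/MM config in place of the default config.py (assumed layout)
cp config.qmmm.py config.py
bash install_gpaw_*.sh all
</code>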

=== Installation procedure ===
  * To install, download the files from one of the settings sections above into a directory of your choice.
  * After downloading, edit ''install_gpaw_*.sh'' (a hypothetical excerpt is shown below).
  * Execute ''install_gpaw_*.sh'':
<code>
bash install_gpaw_*.sh all
</code>
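What exactly needs editing depends on the script; a hypothetical excerpt, with variable names that are assumptions and revision numbers taken from the settings sections above:
<code>
# Hypothetical excerpt of install_gpaw_vsc_sl65.sh -- names are illustrative only
GPAW_REV=11364                      # gpaw 0.10.0
ASE_REV=3440                        # ase 3.8.1
INSTALL_DIR=$HOME/gpaw_inst_latest  # installation prefix used in the examples below
</code>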
  * Testing of GPAW:
<code>
gpaw-python <DIR_TO_GPAW_INST_BIN>/gpaw-test
</code>
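
The test suite can also be run in parallel under MPI; a sketch (the process count is only an example):
<code>
mpirun -np 8 gpaw-python <DIR_TO_GPAW_INST_BIN>/gpaw-test
</code>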

For certain problems, an assert statement in
<code>
~/gpaw_inst_latest/lib64/python2.6/site-packages/gpaw/wavefunctions/lcao.py
</code>
has to be commented out (around line 239):
<code>
#assert abs(c_n.imag).max() < 1e-14
</code>
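
A one-liner that applies this edit (the path and exact line may differ between GPAW versions):
<code>
# Prefix the assert with '# '; adjust the path to your installation
sed -i 's/assert abs(c_n\.imag)\.max() < 1e-14/# &/' \
    ~/gpaw_inst_latest/lib64/python2.6/site-packages/gpaw/wavefunctions/lcao.py
</code>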

=== Running GPAW jobs ===
== Job submission using all cores on the compute nodes (VSC-1 and VSC-2) ==
<code>
#!/bin/sh
# SGE directives: job name, 256 slots in the 'mpich' parallel environment,
# and export of the current environment variables
#$ -N Cl5_4x4x1
#$ -pe mpich 256
#$ -V

mpirun -machinefile $TMPDIR/machines -np $NSLOTS gpaw-python static.py --domain=None --band=1 --sl_default=4,4,64
</code>
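
Assuming the script above is saved as ''gpaw_job.sh'' (the name is illustrative), it is submitted to the queueing system with:
<code>
qsub gpaw_job.sh
</code>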

== Job submission using half of the cores on the compute nodes (VSC-2) ==
If each of your processes requires more than 2 GB (and less than 4 GB) of memory, you can use the parallel environment ''mpich8''. This allocates only 8 processes on each node while still starting 256 processes, now distributed over 32 nodes. In that case it is necessary to use the variable ''$NSLOTS_REDUCED'' instead of ''$NSLOTS''.
<code>
#!/bin/sh
# Same job, but with at most 8 processes per node ('mpich8');
# note $NSLOTS_REDUCED instead of $NSLOTS
#$ -N Cl5_4x4x1
#$ -pe mpich8 256
#$ -V

mpirun -machinefile $TMPDIR/machines -np $NSLOTS_REDUCED gpaw-python static.py --domain=None --band=1 --sl_default=4,4,64
</code>
If even more memory per process is required, the environments ''mpich4'', ''mpich2'', and ''mpich1'' are also available, as discussed in [[doku:ompmpi|Hybrid OpenMP/MPI jobs]].
Alternatively, you can simply start your GPAW job with more processes, which reduces the amount of memory per process. GPAW usually scales well with the number of processes.

A significant speed-up was seen **in our test case** when ''--sl_default'' is set; ''--sl_default=m,n,blocksize'' requests an m x n BLACS process grid with the given ScaLAPACK block size. The first two parameters should be similar in size to get an optimal memory distribution over the nodes.