</code>
      
NOTE: for the fermi nodes use ''CUDA_ARCH = -arch=sm_20'', and for the kepler nodes use ''CUDA_ARCH = -arch=sm_35''; switching from building for 'fermi' to building for 'kepler' therefore requires recompiling the ''gpu'' package, i.e.
<code>
      cd .../wherever/it/is/lammps-28Jun14/lib/gpu
      cp Makefile.fermi Makefile.kepler
      vi Makefile.kepler   ( set CUDA_ARCH = -arch=sm_35 )
      make -f Makefile.kepler clean
      make -f Makefile.kepler
</code>
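Since only the ''CUDA_ARCH'' value differs between the two builds, the node-type-to-flag mapping can be captured in a tiny helper. A minimal sketch (the function name ''cuda_arch_for'' is purely illustrative and not part of LAMMPS; only the two flag values come from the note above):

```shell
# Map a VSC-1 GPU node type to its CUDA_ARCH flag (values taken from the
# note above; the helper itself is only an illustration, not part of LAMMPS).
cuda_arch_for() {
    case "$1" in
        fermi)  printf '%s\n' "-arch=sm_20" ;;
        kepler) printf '%s\n' "-arch=sm_35" ;;
        *)      echo "unknown node type: $1" >&2; return 1 ;;
    esac
}

cuda_arch_for kepler   # prints -arch=sm_35
```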
        
NOTE 2: To get information about included packages:
<code>
 make package-status
</code>
=== For example /opt/sw/lammps/examples/peptide/ on VSC-1 ===
  
1. Change into some temporary directory and prepare the submit script for SGE:

vi ./sge_lammps_peptide.scrpt
<code>
#$ -N lammps_peptide
  
mpirun -machinefile $TMPDIR/machines -np $NSLOTS /opt/sw/lammps/lmp_vsc1 -in ./in.peptide
</code>
  
2. Submit it and compare the results - log.lammps - to the reference log files in /opt/sw/lammps/examples/peptide.
<code>
qsub ./sge_lammps_peptide.scrpt
</code>
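A plain ''diff'' of the whole log against the reference will always report differences, because the log also records host names and timings; extracting just the thermodynamic table first makes the comparison meaningful. A rough sketch, assuming the standard ''Step .../Loop time ...'' markers of a LAMMPS log (the reference log name below is a placeholder):

```shell
# Extract the thermo table of a LAMMPS log (from the "Step ..." header line
# to the "Loop time ..." summary), so host- and time-dependent lines
# elsewhere in the log do not pollute the diff.
thermo_lines() {
    awk '/^Step/,/^Loop time/' "$1"
}

# Usage sketch (the reference log file name is a placeholder):
#   thermo_lines ./log.lammps > run.thermo
#   thermo_lines /opt/sw/lammps/examples/peptide/<reference log> > ref.thermo
#   diff run.thermo ref.thermo
```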
  
  
==== GPU fermi ====

=== For example /opt/sw/lammps/examples/kokkos/ on VSC-1 ===

1. Change into some temporary directory and prepare the submit script for SGE (fermi queue). Two consecutive short test runs will employ 1 GPU and 2 GPUs per node.

vi ./sge_lammps_fermi_kokkos.scrpt
<code>
#$ -N lammps_kokkos
#$ -S /bin/bash
#$ -cwd
#$ -pe smp 12
#$ -V
#$ -q fermi
#$ -P fermi

export LD_LIBRARY_PATH=/opt/sw/openkim-api/1.6.3/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/opt/intel/mkl/lib/intel64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64:$LD_LIBRARY_PATH
export I_MPI_FABRICS=shm:dapl

# single GPU
/opt/sw/lammps/lmp_fermi_vsc1 -k on t 6 -sf kk -in ./in.kokkos
mv ./log.lammps ./log.lammps_fermi.1gpu.kokkos

# 2 x GPU
mpirun -np 2 /opt/sw/lammps/lmp_fermi_vsc1 -k on t 6 -sf kk -in ./in.kokkos
mv ./log.lammps ./log.lammps_fermi.2gpu.kokkos
</code>

2. Change into the fermi group and submit it to the appropriate queue.
<code>
sg fermi
qsub.py ./sge_lammps_fermi_kokkos.scrpt
</code>

3. Compare the results - ./log.lammps_fermi.[1,2]gpu.kokkos - to the reference log files in /opt/sw/lammps/examples/kokkos.
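Besides checking correctness, the two logs also allow a quick speed comparison between the 1-GPU and 2-GPU runs via their ''Loop time'' summary line. A small sketch, assuming the standard ''Loop time of <seconds> on <procs> procs'' wording of the log summary:

```shell
# Print the wall-clock seconds from the "Loop time of <s> on <n> procs"
# summary line of a LAMMPS log (field 4 of that line).
loop_time() {
    awk '/^Loop time of/ { print $4; exit }' "$1"
}

# Usage sketch, after both runs have finished:
#   loop_time ./log.lammps_fermi.1gpu.kokkos
#   loop_time ./log.lammps_fermi.2gpu.kokkos
```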
  • doku/lammps.txt
  • Last modified: 2014/11/04 09:46
  • by sh