On VSC-4 and VSC-5, spack is used to install and provide modules, see [[doku:spack|SPACK - a package manager for HPC systems]]. The methods described in [[doku:modules]] can still be used for backwards compatibility, but we suggest using spack.
  
In order to set environment variables needed for a specific application, the **module** environment may be used:
  * ''module avail''     lists the **available** application software, compilers, parallel environments, and libraries
  * ''module list''      shows the currently loaded packages of your session
  * ''module unload <xyz>'' unloads a particular package <xyz> from your session
  * ''module load <xyz>'' loads a particular package <xyz> into your session
  * ''module display <xyz>'' OR ''module show <xyz>'' shows module details such as the full path of the modulefile and all (or most) of the environment changes it will make if loaded
  * ''module purge'' unloads all loaded modulefiles
== Note: ==

  - The **<xyz>** name corresponds exactly to the output of ''module avail''. Thus, in order to load or unload a selected module, copy and paste exactly the name listed by ''module avail''.\\ 
  - A list of ''module load/unload'' directives may also be included in the top part of a job submission script.\\ 

When all required/intended modules have been loaded, user packages may be compiled as usual; a typical sequence is sketched below.
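
A minimal sketch of such a sequence (the module names are placeholders; copy the exact names from ''module avail'' on your system):
<code>
# clean the session, then load the required toolchain;
# the module names below are hypothetical examples
module purge
module load gcc/12.2.0
module load openmpi/4.1.4
module list                  # verify what is loaded

# compile as usual with the tools the modules provide
mpicc -O2 -o myprog myprog.c
</code>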
==== Node configuration - hyperthreading ====
  
</code>
  
==== Hybrid MPI/OpenMP ====
  
SLURM script:
<code>
#SBATCH -N 3                  # 3 nodes
#SBATCH --ntasks-per-node=2   # 2 MPI processes per node
#SBATCH -c 8                  # 8 cores per MPI process

export OMP_NUM_THREADS=8      # one OpenMP thread per allocated core
srun myhybridcode.exe         # 3 x 2 = 6 MPI ranks, 8 threads each
</code>
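
Assuming the script above is saved as ''hybrid_job.slrm'' (the file name is arbitrary), it is submitted and checked with the standard SLURM commands:
<code>
sbatch hybrid_job.slrm   # submit the job script
squeue -u $USER          # check the job's state in the queue
</code>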
  
**mpirun** pins processes to cores.
At least in the case of pure MPI processes (without any threads), the best performance has been observed with our default pinning (pinning to the physical CPUs 0, 1, ..., 15).
If you need to use hybrid MPI/OpenMP, you may have to disable our default pinning by including one of the following lines (pick the one matching your number of processes per node) in the job script:
<code>
unset I_MPI_PIN_PROCESSOR_LIST
export I_MPI_PIN_PROCESSOR_LIST=0,8                 # configuration for 2 processes / node
export I_MPI_PIN_PROCESSOR_LIST=0,4,8,12            #                   4 processes / node
export I_MPI_PIN_PROCESSOR_LIST=0,2,4,6,8,10,12,14  #                   8 processes / node
</code>
or use a shell script that selects the matching pinning automatically:
<code>
# PROC_PER_NODE must be set beforehand, e.g. from the job's SLURM settings
if [ $PROC_PER_NODE -gt 1 ]
then
    unset I_MPI_PIN_PROCESSOR_LIST
    if [ $PROC_PER_NODE -eq 2 ]
    then
        export I_MPI_PIN_PROCESSOR_LIST=0,8                 # 2 processes / node
    elif [ $PROC_PER_NODE -eq 4 ]
    then
        export I_MPI_PIN_PROCESSOR_LIST=0,4,8,12            # 4 processes / node
    elif [ $PROC_PER_NODE -eq 8 ]
    then
        export I_MPI_PIN_PROCESSOR_LIST=0,2,4,6,8,10,12,14  # 8 processes / node
    else
        export I_MPI_PIN=disable                            # unsupported count: disable pinning
    fi
fi
</code>
See also the [[https://software.intel.com/sites/products/documentation/hpc/ics/impi/41/lin/Reference_Manual/Environment_Variables_Process_Pinning.htm|Intel Environment Variables]].
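
A sketch of how the pieces fit together, assuming Intel MPI and 16 physical cores per node as in the pinning lists above; the process count per node is taken from SLURM's environment instead of being hard-coded:
<code>
#!/bin/bash
#SBATCH -N 2
#SBATCH --ntasks-per-node=4
#SBATCH -c 4

# SLURM exports these variables for the settings above
PROC_PER_NODE=$SLURM_NTASKS_PER_NODE
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# select the matching pinning (see the script above for all cases)
if [ "$PROC_PER_NODE" -eq 4 ]
then
    export I_MPI_PIN_PROCESSOR_LIST=0,4,8,12
fi

srun myhybridcode.exe
</code>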
  
  