

It is increasingly difficult to control how threads are assigned to the available CPU cores in multi-threaded OpenMP applications. Particularly troublesome are hybrid MPI/OpenMP codes, where the developer usually has a clear idea of which regions to run in parallel but must rely on the OS to assign physical cores to the individual threads. A variety of methods exist to state explicitly which CPU core should be bound to which thread; in practice, however, many of these recommended configurations turn out to be non-functional, dependent on the MPI version, or simply ineffective because they are overruled by the queuing system (e.g. SLURM). In the following we describe the auxiliary tool likwid-pin, which has proven reliable for assigning arbitrary threads to individual CPU cores in a more general way.

Suppose we have a little test program, test_mpit_var.c, and want to run it with 8 threads on a single compute node using the following set of physical cores: 3, 4, 2, 1, 6, 5, 7, 9. After compiling it, e.g. with mpigcc -fopenmp ./test_mpit_var.c, we can use the following SLURM submit script:

 #!/bin/bash
 #
 #SBATCH -J tmv
 #SBATCH -N 1
 #SBATCH --time=00:01:00

 module purge
 module load intel-mpi/5 likwid/4.0
 
 export OMP_NUM_THREADS=8
 
 likwid-pin -c 3,3,4,2,1,6,5,7,9 ./a.out

Note that core #3 appears twice in the pin list: the first entry pins the initial main task, which only afterwards branches out into the 8 parallel threads; the remaining eight entries then pin one thread each.
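The pin list can also be explored interactively before submitting the job. A sketch, assuming likwid 4.x and the module names from the script above (the socket expression shown is an assumption about the node's topology, not taken from the original):

```shell
module load likwid/4.0

# Print the available pinning (thread) domains of this node
likwid-pin -p

# Pin as in the submit script: first entry for the main task,
# the remaining eight entries for the 8 OpenMP threads
export OMP_NUM_THREADS=8
likwid-pin -c 3,3,4,2,1,6,5,7,9 ./a.out

# likwid-pin also accepts domain expressions instead of explicit
# core lists, e.g. the first 8 cores of socket 0:
#   likwid-pin -c S0:0-7 ./a.out
```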

  • doku/likwid.1442847996.txt.gz
  • Last modified: 2015/09/21 15:06
  • by sh