Running multiple MPI jobs concurrently
Sample job scripts for running several MPI programs concurrently within a single job:
VSC-4
Sample job for running multiple MPI jobs on a VSC-4 node.
Note: “mem_per_task” should be set such that

mem_per_task * mytasks < mem_per_node - 2 GB

The reduction of roughly 2 GB in available memory accounts for the operating system residing in memory. For a standard node with 96 GB of memory this gives, e.g.:

23 GB * 4 = 92 GB < 94 GB
#!/bin/bash
#SBATCH -J many
#SBATCH -N 1

export SLURM_STEP_GRES=none

mytasks=4
# command to run in each task; "stress -c 24" keeps 24 cores busy
cmd="stress -c 24"
mem_per_task=10G

# launch all tasks in the background, then wait for them to finish
for i in `seq 1 $mytasks`
do
  srun --mem=$mem_per_task --cpus-per-task=2 --ntasks=1 $cmd &
done
wait
VSC-3
With srun:
#!/bin/bash
#SBATCH -J test
#SBATCH -N 1
#SBATCH --ntasks-per-core=1
#SBATCH --ntasks-per-node=2

export SLURM_STEP_GRES=none

module load intel/18 intel-mpi/2018

# two concurrent srun steps, pinned to cores 0,1 and 8,9
for i in 0 8
do
  j=$(($i+1))
  srun -n 2 --cpu_bind=map_cpu:$i,$j ./hello_world_intelmpi2018 &
done
wait
exit 0
With mpirun (Intel MPI):
#!/bin/bash
#SBATCH -J test
#SBATCH -N 1
#SBATCH --ntasks-per-core=1
#SBATCH --ntasks-per-node=2

export SLURM_STEP_GRES=none

module load intel/18 intel-mpi/2018

# two concurrent mpirun invocations, pinned to cores 0,1 and 8,9
# via Intel MPI's I_MPI_PIN_PROCESSOR_LIST
for i in 0 8
do
  j=$(($i+1))
  mpirun -env I_MPI_PIN_PROCESSOR_LIST $i,$j -np 2 ./hello_world_intelmpi2018 &
done
wait
exit 0
You can download the C code example here: hello_world.c
Compile it e.g. with:
# module load intel/18 intel-mpi/2018
# mpiicc -lhwloc hello_world.c -o hello_world_intelmpi2018
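If the download is unavailable, a minimal stand-in for hello_world.c could look like the sketch below. This is an assumption, not the original file: it reports each rank's host and current core using sched_getcpu() instead of hwloc, so it compiles without the -lhwloc flag.

#define _GNU_SOURCE     /* for sched_getcpu() (glibc extension) */
#include <sched.h>
#include <stdio.h>
#include <mpi.h>

/* Minimal sketch (not the original hello_world.c): each MPI rank
 * prints its rank, the host it runs on, and the core it is
 * currently executing on, which makes the CPU pinning visible. */
int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);

    printf("Hello from rank %d of %d on %s, core %d\n",
           rank, size, host, sched_getcpu());

    MPI_Finalize();
    return 0;
}

Running it with one of the job scripts above should print one line per rank; with correct pinning, the reported cores match those given via map_cpu or I_MPI_PIN_PROCESSOR_LIST.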