Running multiple MPI jobs concurrently
Sample job scripts for running multiple MPI tasks concurrently within one job:
VSC-4
Sample job script for running multiple MPI jobs on a VSC-4 node.
Note: The “mem_per_task” should be set such that

mem_per_task * mytasks < mem_per_node - 2 GB

The approx. 2 GB reduction in available memory accounts for the operating system, which is held in memory. For a standard node with 96 GB of memory this gives, e.g.:

23 GB * 4 = 92 GB < 94 GB
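This rule of thumb can be checked with a few lines of shell arithmetic (a sketch; the variable names mirror the job script below but the helper itself is illustrative):

```shell
# Sketch: derive a safe per-task memory limit (all values in GB).
mem_per_node=96   # total memory of a standard VSC-4 node
reserved=2        # approx. memory held by the operating system
mytasks=4
mem_per_task=$(( (mem_per_node - reserved) / mytasks ))  # integer division
echo "${mem_per_task}G per task, $(( mem_per_task * mytasks ))G total < $(( mem_per_node - reserved ))G"
```

With the numbers above this reproduces the 23 GB * 4 = 92 GB < 94 GB check.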
#!/bin/bash
#SBATCH -J many
#SBATCH -N 1

export SLURM_STEP_GRES=none

mytasks=4
cmd="stress -c 24"
mem_per_task=10G

for i in `seq 1 $mytasks`
do
    srun --mem=$mem_per_task --cpus-per-task=2 --ntasks=1 $cmd &
done
wait
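The background-and-wait pattern the script relies on can be tried outside of SLURM; in this sketch `sleep` stands in for the `srun` job steps:

```shell
# Minimal sketch of the pattern above: start each task in the
# background with '&', then 'wait' until all of them have finished.
mytasks=4
for i in `seq 1 $mytasks`
do
    sleep 1 &                  # stand-in for one srun job step
done
wait                           # returns only after all background jobs exit
echo "all $mytasks tasks finished"
```

Because the tasks run concurrently, the whole loop completes in roughly one second rather than four.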
VSC-3
With srun:
#!/bin/bash
#SBATCH -J test
#SBATCH -N 1
#SBATCH --ntasks-per-core=1
#SBATCH --ntasks-per-node=2

export SLURM_STEP_GRES=none
module load intel/18 intel-mpi/2018

for i in 0 8
do
    j=$(($i+1))
    srun -n 2 --cpu_bind=map_cpu:$i,$j ./hello_world_intelmpi2018 &
done
wait
exit 0
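The core bindings that the loop generates can be previewed without launching MPI at all (a sketch that only prints the map_cpu arguments):

```shell
# Sketch: show which core pair each iteration pins its two MPI tasks to
# (cores 0 and 1 for the first srun step, cores 8 and 9 for the second).
for i in 0 8
do
    j=$(($i+1))
    echo "step for i=$i binds to map_cpu:$i,$j"
done
```

Each srun step thus gets an explicit, non-overlapping pair of cores, which is what allows the two steps to run side by side on one node.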