Sample job scripts for running multiple MPI tasks concurrently within one job script:
  
==== VSC-4 ====
  
Sample job script for running multiple MPI jobs on a single VSC-4 node.
  
Note: "mem_per_task" should be chosen such that

<code>
mem_per_task * mytasks < mem_per_node - 2 GB
</code>
  
The reduction of approximately 2 GB in available memory is due to the operating system residing in memory. For a standard node with 96 GB of memory this means, for example:

<code>
23 GB * 4 = 92 GB < 94 GB
</code>
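
To double-check a configuration before submitting, the same rule can be written as a small shell snippet. This is only an illustrative sketch; the values mirror the job script below (4 tasks of 10 GB each on a 96 GB node):

<code>
# illustrative check only, values taken from the job script below
mem_per_node=94   # GB usable on a 96 GB node (~2 GB reserved for the OS)
mem_per_task=10   # GB requested per task (matches --mem=10G below)
mytasks=4

if [ $(( mem_per_task * mytasks )) -lt $mem_per_node ]; then
    echo "OK: $(( mem_per_task * mytasks )) GB fit into $mem_per_node GB"
else
    echo "Too much memory requested, reduce mem_per_task or mytasks"
fi
</code>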
  
<code>
#!/bin/bash
#SBATCH -J many
#SBATCH -N 1
  
export SLURM_STEP_GRES=none
  
mytasks=4              # number of concurrent tasks to start
cmd="stress -c 24"     # command each task runs (here: stress with 24 CPU workers)
mem_per_task=10G       # memory requested per task, see the note above
  
for i in `seq 1 $mytasks`
do
        # --cpus-per-task matches the 24 workers started by "stress -c 24"
        srun --mem=$mem_per_task --cpus-per-task=24 --ntasks=1 $cmd &
done
wait
  
</code>
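
The script can be submitted and checked like any other batch job. The filename below is only an example:

<code>
sbatch multi_mpi.sh    # submit the job script (example filename)
squeue -u $USER        # list your jobs and their states
sacct -j <jobid>       # inspect the individual srun steps after submission
</code>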
  
==== VSC-3 ====

Sample job script to run two MPI tasks within one job script concurrently on a VSC-3 node:

== With srun: ==
<code>
#!/bin/bash
#SBATCH -J test
#SBATCH -N 1
#SBATCH --ntasks-per-core=1
#SBATCH --ntasks-per-node=2

export SLURM_STEP_GRES=none

module load intel/18 intel-mpi/2018

for i in 0 8
do
  j=$(($i+1))
  srun -n 2 --cpu_bind=map_cpu:$i,$j ./hello_world_intelmpi2018 &
done
wait

exit 0
</code>

== With mpirun (Intel MPI): ==
<code>
#!/bin/bash
#SBATCH -J test
#SBATCH -N 1
#SBATCH --ntasks-per-core=1
#SBATCH --ntasks-per-node=2

export SLURM_STEP_GRES=none

module load intel/18 intel-mpi/2018

for i in 0 8
do
  j=$(($i+1))
  mpirun -env I_MPI_PIN_PROCESSOR_LIST $i,$j -np 2 ./hello_world_intelmpi2018 &
done
wait

exit 0
</code>

You can download the C code example here: {{ :doku:hello_world.c |hello_world.c}}
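
The examples above launch a binary called "hello_world_intelmpi2018", which is not part of this page. A minimal sketch for building it from the linked hello_world.c with the same Intel toolchain could look like this:

<code>
# sketch: compile hello_world.c with the Intel MPI compiler wrapper
module load intel/18 intel-mpi/2018
mpiicc -o hello_world_intelmpi2018 hello_world.c
</code>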
  
  