Sample job scripts to run multiple MPI tasks concurrently within a single job script:

==== VSC-4 ====

Sample job script for running multiple MPI jobs concurrently on one VSC-4 node.

Note: ''mem_per_task'' should be chosen such that

<code>
mem_per_task * mytasks < mem_per_node - 2 GB
</code>

The reduction of roughly 2 GB in available memory accounts for the operating system, which is kept in memory. For a standard node with 96 GB of memory this means, e.g.:

<code>
23 GB * 4 = 92 GB < 94 GB
</code>
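
As a minimal sketch (assuming the standard 96 GB node from the example above), ''mem_per_task'' can also be derived directly inside the job script:

<code>
# sketch with assumed values: derive mem_per_task from the node memory
mem_per_node=96                                   # GB on a standard VSC-4 node
mytasks=4
mem_per_task=$(( (mem_per_node - 2) / mytasks ))  # (96 - 2) / 4 = 23 (integer arithmetic)
echo "${mem_per_task}G"                           # prints 23G, usable for srun --mem
</code>

The following job script starts ''mytasks'' job steps in the background and waits for all of them to finish: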
<code>
#!/bin/bash
#SBATCH -J many
#SBATCH -N 1

export SLURM_STEP_GRES=none

mytasks=4
cmd="stress -c 24"     # placeholder workload; replace with your MPI program
mem_per_task=10G

# start each task as a separate job step in the background
for i in `seq 1 $mytasks`
do
        srun --mem=$mem_per_task --cpus-per-task=2 --ntasks=1 $cmd &
done
# wait for all backgrounded job steps to finish
wait
</code>
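
Here ''stress -c 24'' is only a placeholder workload. As a sketch (the file name of the job script is assumed), the script could be submitted and the resulting job steps inspected like this:

<code>
sbatch multi_mpi.slrm                                      # assumed file name of the script above
squeue -u $USER                                            # the job itself
sacct -j <jobid> --format=JobID,JobName,AllocCPUS,State    # one job step per backgrounded srun
</code>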
  
==== VSC-3 ====

=== With srun: ===
<code>
#!/bin/bash
#SBATCH -J test
#SBATCH -N 1
#SBATCH --ntasks-per-core=1
#SBATCH --ntasks-per-node=2

export SLURM_STEP_GRES=none

module load intel/18 intel-mpi/2018

for i in 0 8
do
  j=$(($i+1))
  srun -n 2 --cpu_bind=map_cpu:$i,$j ./hello_world_intelmpi2018 &
done
wait

exit 0
</code>

You can download the C code example here: {{ :doku:hello_world.c |hello_world.c}}

Compile it e.g. with:

<code>
# module load intel/18 intel-mpi/2018
# mpiicc -lhwloc hello_world.c -o hello_world_intelmpi2018
</code>
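
For illustration, the loop in the srun script above expands to the following two backgrounded job steps (the core numbers follow from ''for i in 0 8'' and ''j=$(($i+1))''):

<code>
srun -n 2 --cpu_bind=map_cpu:0,1 ./hello_world_intelmpi2018 &   # first 2-rank run, pinned to cores 0 and 1
srun -n 2 --cpu_bind=map_cpu:8,9 ./hello_world_intelmpi2018 &   # second 2-rank run, pinned to cores 8 and 9
wait                                                            # return only after both runs have finished
</code>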
  
  