Sample job script to run two MPI tasks within one job script concurrently:
  
==== VSC-4 ====

Sample job script for running multiple MPI jobs on a VSC-4 node.

Note: the "mem_per_task" value should be set such that

<code>
mem_per_task * mytasks < mem_per_node - 2 GB
</code>

The roughly 2 GB reduction in available memory accounts for the operating system residing in memory. For a standard node with 96 GB of memory this means, e.g.:

<code>
23 GB * 4 = 92 GB < 94 GB
</code>
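
If you prefer to let the job script do this arithmetic itself, a small sketch (the 96 GB node size and the 2 GB reserve are taken from the example above):

<code>
mytasks=4
mem_per_node=96                                   # GB on a standard node
mem_per_task=$(( (mem_per_node - 2) / mytasks ))G # largest safe value, 23G here
</code>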

The complete job script:

<code>
#!/bin/bash
#SBATCH -J many
#SBATCH -N 1

# do not attach any GRES to the individual job steps
export SLURM_STEP_GRES=none

mytasks=4            # number of concurrent tasks
cmd="stress -c 24"   # command each task runs
mem_per_task=10G     # memory limit per task (see the note above)

# start each task as a background job step, then wait for all of them
for i in `seq 1 $mytasks`
do
        srun --mem=$mem_per_task --cpus-per-task=2 --ntasks=1 $cmd &
done
wait
</code>
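
The script is submitted like any other batch job; the filename ("many.sh") is an assumed example:

<code>
$ sbatch many.sh   # all four steps run concurrently inside this one job
$ squeue -u $USER  # the job shows up as a single entry
</code>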

==== VSC-3 ====

=== With srun: ===
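
The srun variant follows the same pattern as on VSC-4: each MPI program is launched as its own job step in the background. A minimal sketch (job name, task counts, and memory values are assumptions; the binary is the hello_world example compiled below):

<code>
#!/bin/bash
#SBATCH -J two_mpi
#SBATCH -N 1

export SLURM_STEP_GRES=none

# two concurrent MPI runs, each as a separate job step with its own resources
srun --ntasks=8 --mem=20G ./hello_world_intelmpi2018 &
srun --ntasks=8 --mem=20G ./hello_world_intelmpi2018 &
wait
</code>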
  
=== With mpirun (Intel MPI): ===
  
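With Intel MPI's mpirun, the nodes of the allocation have to be divided explicitly between the concurrent runs, for example via machinefiles. A minimal sketch (job name, machinefile handling, and process counts are assumptions):

<code>
#!/bin/bash
#SBATCH -J two_mpi
#SBATCH -N 2

# split the allocated nodes into one machinefile per MPI run
scontrol show hostnames $SLURM_JOB_NODELIST > all_nodes
head -n 1 all_nodes > machines_1
tail -n 1 all_nodes > machines_2

# start both runs concurrently, one node each, then wait for both
mpirun -machinefile machines_1 -np 16 ./hello_world_intelmpi2018 &
mpirun -machinefile machines_2 -np 16 ./hello_world_intelmpi2018 &
wait
</code>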
  
You can download the C code example here: {{ :doku:hello_world.c |hello_world.c}}

Compile it e.g. with:
  
<code>
# module load intel/18 intel-mpi/2018
# mpiicc -lhwloc hello_world.c -o hello_world_intelmpi2018
</code>
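
For a quick interactive check of the binary (a sketch; the rank count is arbitrary):

<code>
# mpirun -np 4 ./hello_world_intelmpi2018
</code>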
  
  