==== VSC-4 ====
  
Sample job for running multiple MPI jobs on a VSC-4 node.

Note: The "mem_per_task" should be set such that

<code>
mem_per_task * mytasks < mem_per_node - 2 GB
</code>

The reduction of approximately 2 GB in available memory is due to the operating system residing in memory. For a standard node with 96 GB of memory this means, e.g.:

<code>
23 GB * 4 = 92 GB < 94 GB
</code>
  
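A minimal sketch of such a job script, assuming four single-task MPI runs with 23 GB per task on a standard 96 GB node; the job name and the binary my_mpi_program are placeholders:

<code>
#!/bin/bash
#SBATCH -J multi_mpi
#SBATCH -N 1
#SBATCH --ntasks-per-node=4

# 4 tasks * 23 GB = 92 GB < 94 GB (96 GB node minus ~2 GB for the OS)
# start four single-task MPI runs in the background, 23 GB each;
# my_mpi_program is a placeholder for your MPI binary
for i in 1 2 3 4
do
  srun --mem=23G -n 1 ./my_mpi_program &
done
# wait for all background job steps to finish
wait

exit 0
</code>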
  
==== VSC-3 ====

=== With srun: ===
<code>
#!/bin/bash
#SBATCH -J test
#SBATCH -N 1
#SBATCH --ntasks-per-core=1
#SBATCH --ntasks-per-node=2

# do not request generic resources (GRES) for the individual job steps
export SLURM_STEP_GRES=none

module load intel/18 intel-mpi/2018

# start two 2-task MPI runs in the background,
# pinned to cores 0,1 and 8,9 respectively
for i in 0 8
do
  j=$(($i+1))
  srun -n 2 --cpu_bind=map_cpu:$i,$j ./hello_world_intelmpi2018 &
done
# wait for all background job steps to finish
wait

exit 0
</code>
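Each srun call starts a separate job step within the allocation. A small sketch of submitting and inspecting such a job; the script name and <jobid> are placeholders:

<code>
# submit the job script (filename is a placeholder)
sbatch multi_mpi_vsc3.sh

# each srun call above appears as a separate job step,
# e.g. <jobid>.0 and <jobid>.1, in the accounting:
sacct -j <jobid>
</code>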

=== With mpirun (Intel MPI): ===

<code>
#!/bin/bash
#SBATCH -J test
#SBATCH -N 1
#SBATCH --ntasks-per-core=1
#SBATCH --ntasks-per-node=2

# do not request generic resources (GRES) for the individual job steps
export SLURM_STEP_GRES=none

module load intel/18 intel-mpi/2018

# start two 2-task MPI runs in the background; I_MPI_PIN_PROCESSOR_LIST
# pins the two ranks of each run to cores 0,1 and 8,9 respectively
for i in 0 8
do
  j=$(($i+1))
  mpirun -env I_MPI_PIN_PROCESSOR_LIST $i,$j -np 2 ./hello_world_intelmpi2018 &
done
wait

exit 0
</code>
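To check where Intel MPI actually placed the ranks, the pinning report can be enabled with the standard Intel MPI variable I_MPI_DEBUG; setting it to 4 or higher prints the rank-to-core mapping at startup. A sketch (the exact output format depends on the Intel MPI version):

<code>
# print Intel MPI's pinning information at startup
export I_MPI_DEBUG=4
mpirun -env I_MPI_PIN_PROCESSOR_LIST 0,1 -np 2 ./hello_world_intelmpi2018
</code>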

You can download the C code example here: {{ :doku:hello_world.c |hello_world.c}}

Compile it e.g. with:

<code>
# module load intel/18 intel-mpi/2018
# mpiicc -lhwloc hello_world.c -o hello_world_intelmpi2018
</code>
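For a first functional test, the compiled binary can then be started directly with two processes from within an allocation:

<code>
# quick test run with two MPI processes
mpirun -np 2 ./hello_world_intelmpi2018
</code>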
  
  