==== VSC-4 ====
Sample job for running multiple MPI jobs on a VSC-4 node.

Note: The memory requested by all tasks together has to stay below the memory available on the node:

<code>
mem_per_task * mytasks < mem_per_node - 2GB
</code>

The reduction of roughly 2 GB in available memory is due to the operating system residing in memory. For a standard node with 96 GB of memory this means, e.g.:

<code>
23 GB * 4 = 92 GB < 94 GB
</code>
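
A minimal sketch of such a job script, modeled on the VSC-3 srun example below and assuming four 2-task MPI jobs with 23 GB each (''my_mpi_program'' is a placeholder binary):

<code>
#!/bin/bash
#SBATCH -J test
#SBATCH -N 1
#SBATCH --ntasks-per-core=1
#SBATCH --ntasks-per-node=8
#SBATCH --mem=92G

export SLURM_STEP_GRES=none

# four independent 2-task MPI jobs, each pinned to its own pair of cores
for i in 0 2 4 6
do
  j=$(($i+1))
  srun -n 2 --cpu_bind=map_cpu:$i,$j ./my_mpi_program &
done
wait

exit 0
</code>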
==== VSC-3 ====
- | |||
- | === With srun: === | ||
- | < | ||
- | #!/bin/bash | ||
- | #SBATCH -J test | ||
- | #SBATCH -N 1 | ||
- | #SBATCH --ntasks-per-core=1 | ||
- | #SBATCH --ntasks-per-node=2 | ||
- | |||
- | export SLURM_STEP_GRES=none | ||
- | |||
- | module load intel/18 intel-mpi/ | ||
- | |||
- | for i in 0 8 | ||
- | do | ||
- | j=$(($i+1)) | ||
- | srun -n 2 --cpu_bind=map_cpu: | ||
- | done | ||
- | wait | ||
- | |||
- | exit 0 | ||
- | </ | ||
- | |||
- | === With mpirun (Intel MPI): === | ||
- | |||
- | < | ||
- | #!/bin/bash | ||
- | #SBATCH -J test | ||
- | #SBATCH -N 1 | ||
- | #SBATCH --ntasks-per-core=1 | ||
- | #SBATCH --ntasks-per-node=2 | ||
- | |||
- | export SLURM_STEP_GRES=none | ||
- | |||
- | module load intel/18 intel-mpi/ | ||
- | |||
- | for i in 0 8 | ||
- | do | ||
- | j=$(($i+1)) | ||
- | mpirun -env I_MPI_PIN_PROCESSOR_LIST $i,$j -np 2 ./ | ||
- | done | ||
- | wait | ||
- | |||
- | exit 0 | ||
- | </ | ||
- | |||
- | You can download the C code example here: {{ : | ||
- | |||
- | Compile it e.g. with: | ||
- | |||
- | < | ||
- | # module load intel/18 intel-mpi/ | ||
- | # mpiicc -lhwloc hello_world.c -o hello_world_intelmpi2018 | ||
- | </ | ||
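
A minimal sketch of what such a ''hello_world.c'' could look like, assuming it uses hwloc (hence the ''-lhwloc'' flag) to report each rank's CPU binding:

<code>
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <hwloc.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* query the CPU set this process is currently bound to */
    hwloc_topology_t topology;
    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    hwloc_cpuset_t cpuset = hwloc_bitmap_alloc();
    hwloc_get_cpubind(topology, cpuset, HWLOC_CPUBIND_PROCESS);

    char *str;
    hwloc_bitmap_asprintf(&str, cpuset);
    printf("Hello from rank %d of %d, bound to CPU(s) %s\n", rank, size, str);

    free(str);
    hwloc_bitmap_free(cpuset);
    hwloc_topology_destroy(topology);
    MPI_Finalize();
    return 0;
}
</code>

With the pinning used in the job scripts above, each rank should report the core it was mapped to.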