doku:vsc5quickstart [2022/06/24 09:54] – [Intel MPI] jz
zen3_1024 -> AMD CPU nodes with 2x AMD Epyc (Milan) and 1TB RAM
zen3_2048 -> AMD CPU nodes with 2x AMD Epyc (Milan) and 2TB RAM
</code>
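The partition and QOS can also be chosen at submission time instead of inside the script; a minimal sketch, assuming an existing job script named ''job.sh'' (a hypothetical placeholder):

```shell
# Submit an existing job script to a specific AMD partition
# (overrides any --partition/--qos lines inside the script).
# "job.sh" is a placeholder for your own job script.
sbatch --partition=zen3_1024 --qos goodluck job.sh
```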
./my_program
</file>
+ | |||
+ | Job Scripts for the AMD CPU nodes: | ||
+ | |||
+ | <file sh zen3_0512.sh> | ||
+ | #!/bin/sh | ||
+ | #SBATCH -J < | ||
+ | #SBATCH -N 1 | ||
+ | #SBATCH --partition=zen3_0512 | ||
+ | #SBATCH --qos goodluck | ||
+ | ./ | ||
+ | </ | ||
+ | |||
+ | <file sh zen3_1024.sh> | ||
+ | #!/bin/sh | ||
+ | #SBATCH -J < | ||
+ | #SBATCH -N 1 | ||
+ | #SBATCH --partition=zen3_1024 | ||
+ | #SBATCH --qos goodluck | ||
+ | ./ | ||
+ | </ | ||
+ | |||
+ | <file sh zen3_2048.sh> | ||
+ | #!/bin/sh | ||
+ | #SBATCH -J < | ||
+ | #SBATCH -N 1 | ||
+ | #SBATCH --partition=zen3_2048 | ||
+ | #SBATCH --qos goodluck | ||
+ | ./ | ||
+ | </ | ||
+ | |||
Example job script to use both GPUs on a GPU node:
Official Slurm documentation: https://slurm.schedmd.com/
+ | |||
+ | ===== Intel MPI ===== | ||
+ | |||
+ | When **using Intel-MPI on the AMD nodes and mpirun** please set the following environment variable in your job script to allow for correct process pinning: | ||
+ | |||
+ | < | ||
+ | export I_MPI_PIN_RESPECT_CPUSET=0 | ||
+ | </ | ||