doku:vsc5quickstart [2023/01/05 21:05] – goldenberg

===== SLURM =====
<code>
$ sinfo -o %P
PARTITION
zen2_0256_a40x2 -> GPU nodes with 2x AMD Epyc (Rome), 256GB RAM and 2x NVIDIA A40
jupyter -> reserved for the jupyterhub
login5 -> login nodes, not an actual slurm partition
zen3_2048 -> AMD CPU nodes with 2x AMD Epyc (Milan) and 2TB RAM
zen3_1024 -> AMD CPU nodes with 2x AMD Epyc (Milan) and 1TB RAM
zen3_0512* -> The default partition. AMD CPU nodes with 2x AMD Epyc (Milan) and 512GB RAM
cascadelake_0384 -> Intel CPU nodes with 2x Intel Cascadelake and 384GB RAM
zen3_0512_a100x2 -> GPU nodes with 2x AMD Epyc (Milan), 512GB RAM and 2x NVIDIA A100
</code>
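Jobs that do not request a partition land on the default partition (marked with ''*'' in the ''sinfo'' output). To use a different partition, pass its name at submit time; a minimal sketch, where ''my_job.sh'' is a hypothetical job script:

<code>
# submit my_job.sh to the 1TB-RAM AMD nodes instead of the default partition
$ sbatch --partition=zen3_1024 my_job.sh
</code>

The same can be done inside the script itself with an ''#SBATCH --partition=...'' line, as in the examples below.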
</file>

<file sh zen3_1024.sh>
#!/bin/sh
#SBATCH -J <job name>
#SBATCH -N 1
#SBATCH --partition=zen3_1024
</file>
<file sh zen3_2048.sh>
#!/bin/sh
#SBATCH -J <job name>
#SBATCH -N 1
#SBATCH --partition=zen3_2048
#SBATCH --qos goodluck
./my_program
</file>
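A script like the one above is submitted with ''sbatch'' and can be monitored with ''squeue''; a sketch, where the job ID in the output will of course differ:

<code>
$ sbatch zen3_2048.sh
Submitted batch job 123456
$ squeue -u $USER
</code>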
Official Slurm documentation: https://slurm.schedmd.com/
+ | |||
+ | ===== Intel MPI ===== | ||
+ | |||
+ | When **using Intel-MPI on the AMD nodes and mpirun** please set the following environment variable in your job script to allow for correct process pinning: | ||
+ | |||
+ | < | ||
+ | export I_MPI_PIN_RESPECT_CPUSET=0 | ||
+ | </ | ||
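Putting this together, an Intel-MPI job script for the AMD nodes might look like the following sketch. The node count, partition, and program name (''./my_mpi_program'') are placeholders, not values from this page:

<file sh intel_mpi_job.sh>
#!/bin/sh
#SBATCH -J <job name>
#SBATCH -N 2
#SBATCH --partition=zen3_0512

# allow Intel MPI to pin processes correctly on the AMD nodes
export I_MPI_PIN_RESPECT_CPUSET=0

mpirun ./my_mpi_program
</file>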