$ sinfo -o %P
PARTITION
zen2_0256_a40x2 -> AMD CPU nodes with 2x AMD Epyc (Milan) and 2x NVIDIA A40 and 256GB RAM
jupyter -> reserved for the jupyterhub
login5 -> login nodes, not an actual slurm partition
zen3_2048 -> AMD CPU nodes with 2x AMD Epyc (Milan) and 2TB RAM
zen3_1024 -> AMD CPU nodes with 2x AMD Epyc (Milan) and 1TB RAM
zen3_0512* -> The default partition. AMD CPU nodes with 2x AMD Epyc (Milan) and 512GB RAM
cascadelake_0384 -> Intel CPU nodes with 2x Intel Cascadelake and 384GB RAM
zen3_0512_a100x2 -> AMD CPU nodes with 2x AMD Epyc (Milan) and 2x NVIDIA A100 and 512GB RAM
</code>
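A partition from the list above is selected with SLURM's ''--partition'' option in a job script. The following is a minimal sketch, assuming the default ''zen3_0512'' partition; the job name, resource values, and the executable ''./my_program'' are illustrative placeholders, not part of this guide:

<code>
#!/bin/bash
#SBATCH --job-name=example      # illustrative job name
#SBATCH --partition=zen3_0512   # default CPU partition from the list above
#SBATCH --ntasks=1              # adjust to your workload
#SBATCH --time=00:10:00         # walltime limit HH:MM:SS

./my_program                    # placeholder for your own executable
</code>

Save the script (e.g. as ''job.sh'') and submit it with ''sbatch job.sh''.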
  
===== Intel MPI =====

When **using Intel-MPI on the AMD nodes and mpirun**, please set the following environment variable in your job script to allow for correct process pinning:

<code>
  • doku/vsc5quickstart.txt
  • Last modified: 2023/05/17 15:28
  • by msiegel