<code>
$ sinfo -o %P
PARTITION
zen2_0256_a40x2 -> AMD CPU nodes with 2x AMD Epyc (Milan) and 2x NVIDIA A40 and 256GB RAM
jupyter -> reserved for the jupyterhub
login5 -> login nodes, not an actual slurm partition
zen3_2048 -> AMD CPU nodes with 2x AMD Epyc (Milan) and 2TB RAM
zen3_1024 -> AMD CPU nodes with 2x AMD Epyc (Milan) and 1TB RAM
zen3_0512* -> The default partition. AMD CPU nodes with 2x AMD Epyc (Milan) and 512GB RAM
cascadelake_0384 -> Intel CPU nodes with 2x Intel Cascadelake and 384GB RAM
zen3_0512_a100x2 -> AMD CPU nodes with 2x AMD Epyc (Milan) and 2x NVIDIA A100 and 512GB RAM
</code>
  
./my_program
</file>

Job Scripts for the AMD CPU nodes:

<file sh zen3_0512.sh>
#!/bin/sh
#SBATCH -J <meaningful name for job>
#SBATCH -N 1
#SBATCH --partition=zen3_0512
#SBATCH --qos goodluck
./my_program
</file>

<file sh zen3_1024.sh>
#!/bin/sh
#SBATCH -J <meaningful name for job>
#SBATCH -N 1
#SBATCH --partition=zen3_1024
#SBATCH --qos goodluck
./my_program
</file>

<file sh zen3_2048.sh>
#!/bin/sh
#SBATCH -J <meaningful name for job>
#SBATCH -N 1
#SBATCH --partition=zen3_2048
#SBATCH --qos goodluck
./my_program
</file>

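The three scripts differ only in the partition name, so they can also be generated from a single template. A minimal sketch (the `goodluck` QOS and `my_program` placeholder are taken from the examples above; adapt the job name and program to your needs):

```shell
#!/bin/sh
# Write a job script for a given zen3 partition (illustrative helper,
# not part of the VSC documentation itself).
make_jobscript() {
partition=$1
cat > "${partition}.sh" <<EOF
#!/bin/sh
#SBATCH -J my_job
#SBATCH -N 1
#SBATCH --partition=${partition}
#SBATCH --qos goodluck
./my_program
EOF
}

make_jobscript zen3_0512
make_jobscript zen3_1024
make_jobscript zen3_2048
```

Each generated file can then be submitted as usual with ''sbatch zen3_0512.sh'' (and so on).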
  
Example job script to use both GPUs on a GPU node:
  
Official Slurm documentation: https://slurm.schedmd.com

===== Intel MPI =====

When **using Intel MPI with mpirun on the AMD nodes**, please set the following environment variable in your job script to allow for correct process pinning:

<code>
export I_MPI_PIN_RESPECT_CPUSET=0
</code>
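For example, the variable would be set before the ''mpirun'' call in the job script. A sketch (partition and QOS are taken from the CPU examples above; `my_mpi_program` is a placeholder for your own binary):

```shell
#!/bin/sh
#SBATCH -J mpi_job
#SBATCH -N 2
#SBATCH --partition=zen3_0512
#SBATCH --qos goodluck

# Allow Intel MPI to pin processes correctly on the AMD nodes
export I_MPI_PIN_RESPECT_CPUSET=0
mpirun ./my_mpi_program
```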
  
  • doku/vsc5quickstart.txt
  • Last modified: 2023/05/17 15:28
  • by msiegel