Figure: cluster overview (pictures_cluster-uebersicht-vsc.jpg)


Figure: cluster schema (pictures_cluster-schema.jpg)


  • 2 login nodes for transferring data and for preparing and submitting job files
  • 2 NFS servers holding the /home, /calc and /scratch directories
  • jobs are submitted to SLURM from a login node (a minimal job file is sketched below)
  • SLURM dispatches the jobs to the compute nodes
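
A minimal sketch of such a job file (the job name, time limit and the file name hello.sh are illustrative, not site-specific values):

  #!/bin/bash
  #SBATCH --job-name=hello        # name shown in the queue
  #SBATCH --ntasks=1              # a single task on one core
  #SBATCH --time=00:05:00         # wall-time limit (hh:mm:ss)

  # the actual work of the job; here it just prints the compute node it ran on
  srun hostname

From a login node, sbatch hello.sh submits the file, and squeue -u $USER shows its state in the queue.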

Data can be written to:

  • Scratch directory: /scratch/username (deleted after job)
  • Calc directory: /calc/username (persistent)
  • /tmp: local disk on node (deleted after job)
  • /dev/shm: in-memory file system (deleted after job)
  • Home directory: /home/username
    • writable in principle, but not intended for job data
    • use /calc/username instead (see the sketch after this list)
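
A sketch of a job file that uses these directories (it assumes /scratch/$USER and /calc/$USER exist for your account; the results directory is hypothetical):

  #!/bin/bash
  #SBATCH --job-name=io-demo
  #SBATCH --ntasks=1
  #SBATCH --time=01:00:00

  # work in the per-user scratch directory, which is deleted after the job
  cd /scratch/$USER

  # ... run the actual computation here, writing into ./results ...

  # copy the results to the persistent calc directory before the job ends
  cp -r results /calc/$USER/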

The compute nodes are grouped into the following partitions:

  partition   CPU                                      cores  threads  memory  node names
  E5-2690v4   2x Intel Xeon CPU E5-2690 v4 @ 2.60GHz   28     28       128 GB  c1-[01-12]
  E5-2690v4   2x Intel Xeon CPU E5-2690 v4 @ 2.60GHz   28     28       256 GB  c2-[01-08]
  Phi         1x Intel Xeon Phi CPU 7210 @ 1.30GHz     64     256      208 GB  c3-[01-08]
  E5-1650     1x Intel Xeon CPU E5-1650 v4 @ 3.60GHz   6      6        64 GB   c4-[01-16]
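
To run on a specific partition, request it in the job file. A sketch using values from the table (the resource numbers are illustrative, and my_program is a hypothetical executable):

  #!/bin/bash
  #SBATCH --partition=E5-2690v4   # partition name from the table above
  #SBATCH --nodes=1               # one node of this partition
  #SBATCH --ntasks=28             # all 28 cores of an E5-2690v4 node
  #SBATCH --mem=120G              # stay below the 128 GB of a c1 node

  srun ./my_program

On the login nodes, sinfo lists the available partitions and their current state.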

The current cluster (“smmpmech.unileoben.ac.at”) will be integrated into the new cluster as additional partitions.


The nodes are connected by two networks:

  • Mellanox FDR InfiniBand network, MT27500 ConnectX-3 (56 Gbit/s)
    • c1-[01-12], c2-[01-08]
    • storage servers (f1, f2)
    • login nodes
  • Gigabit Ethernet
    • all servers and compute nodes
