[Figure: VSC cluster overview (pictures_cluster-uebersicht-vsc.jpg)]


[Figure: Cluster schema (pictures_cluster-schema.jpg)]


  • 2 login nodes for transferring data and for preparing and submitting job files
  • 2 NFS servers holding the home and scratch directories
  • Jobs are submitted to SLURM from a login node (see the example job script after this list)
  • SLURM dispatches the jobs to the compute nodes
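
A minimal SLURM job script might look like the following sketch; the job name, partition choice, resource limits, and program name are illustrative assumptions, not site defaults:

  #!/bin/bash
  #SBATCH --job-name=example          # name shown in the queue
  #SBATCH --partition=E5-2690v4       # a partition from the table below
  #SBATCH --ntasks=1                  # number of tasks to run
  #SBATCH --time=00:10:00             # wall-clock limit (hypothetical)

  srun ./my_program                   # my_program is a placeholder

Submit it from a login node with "sbatch job.sh".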

Data can be written to:

  • Home directory: /home/<username>
  • Scratch directory: /scratch (deleted after the job finishes)
  • Calc directory: /calc/<username> (persistent)
  • /tmp1: local disk on the node (deleted after the job finishes)
  • /dev/shm: in-memory file system (deleted after the job finishes)
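
The sketch below shows how these locations are typically combined in a job script: stage the input onto the fast node-local disk, compute there, and copy the results to a persistent directory before the job ends. The file and program names are placeholders:

  #!/bin/bash
  #SBATCH --job-name=staging-example

  # copy the input from the persistent home directory to the node-local disk
  cp /home/$USER/input.dat /tmp1/
  cd /tmp1

  # /tmp1 is deleted after the job, so results must be copied out
  srun ./my_program input.dat > output.dat   # my_program is a placeholder

  # save the results to the persistent calc directory
  cp output.dat /calc/$USER/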

  partition   CPU                                      cores  threads  memory  nodename
  E5-2690v4   2x Intel Xeon CPU E5-2690 v4 @ 2.60GHz      28       56  128 GB  c1-[01-12]
  E5-2690v4   2x Intel Xeon CPU E5-2690 v4 @ 2.60GHz      28       56  256 GB  c2-[01-08]
  Phi         1x Intel Xeon Phi CPU 7210 @ 1.30GHz        64      256  208 GB  c3-[01-08]
  E5-1650v4   1x Intel Xeon CPU E5-1650 v4 @ 3.60GHz       6       12   64 GB  c4-[01-16]
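
To direct a job to one of these partitions, pass the partition name to SLURM; a short sketch (assuming the names in the table are the SLURM partition names):

  # list partitions and node states (run on a login node)
  sinfo

  # submit a job script to the Xeon Phi partition
  sbatch --partition=Phi job.sh

  # or equivalently, as a directive inside the job script
  #SBATCH --partition=Phi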

  • Mellanox FDR InfiniBand network, MT27500 ConnectX-3 (56 Gbit/s); see the multi-node sketch after this list
    • c1-[01-12], c2-[01-08]
    • storage servers (f1, f2)
    • login nodes
  • Gigabit Ethernet
    • all servers and compute nodes
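
Only the c1/c2 compute nodes, the storage servers, and the login nodes are attached to the InfiniBand fabric, so multi-node MPI jobs are best placed on those partitions. A hedged sketch (mpi_program is a placeholder; MPI environment setup is site-specific and omitted):

  #!/bin/bash
  #SBATCH --partition=E5-2690v4     # c1/c2 nodes communicate via FDR InfiniBand
  #SBATCH --nodes=2                 # span two nodes; traffic crosses the fabric
  #SBATCH --ntasks-per-node=28      # one task per physical core (28 per node)

  srun ./mpi_program                # mpi_program is a placeholder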
