  
  * 2 Login nodes to transfer data, prepare and submit job files
  * 2 NFS servers holding /home, /calc and /scratch directories
  * Submit jobs to SLURM (see the submission sketch below)
  * SLURM sends jobs to compute nodes
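A minimal sketch of the submission step from a login node, assuming Python is available there and that a batch script named ''%%job.sh%%'' already exists (the script name is a placeholder, not something this page prescribes):

<code python>
import subprocess

# Submit an existing batch script to SLURM from a login node.
# "job.sh" is a placeholder name; sbatch and squeue are the standard SLURM commands.
result = subprocess.run(["sbatch", "job.sh"], capture_output=True, text=True, check=True)

# sbatch replies with e.g. "Submitted batch job 12345"; keep the job id.
job_id = result.stdout.strip().split()[-1]
print("Submitted job", job_id)

# SLURM dispatches the job to a free compute node; its state can be checked with squeue.
print(subprocess.run(["squeue", "--jobs", job_id], capture_output=True, text=True).stdout)
</code>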
Data can be written to:
  
  * Scratch directory: ''%%/scratch/username%%'' (deleted after job; see the staging sketch after this list)
  * Calc directory: ''%%/calc/username%%'' (persistent)
  * ''%%/tmp%%'': local disk on node (deleted after job)
  * ''%%/dev/shm%%'': in-memory file system (deleted after job)
  * Home directory: ''%%/home/username%%''
    * writing here is technically possible, but the home directory is not intended for job data
    * use ''%%/calc/username%%'' instead
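A minimal staging sketch following these conventions, assuming the actual computation is some placeholder program ''%%mysolver%%'' and that the input and result file names are illustrative:

<code python>
import getpass
import os
import shutil
import subprocess

user = getpass.getuser()
scratch = f"/scratch/{user}"   # fast working space, deleted after the job
calc = f"/calc/{user}"         # persistent storage for inputs and results

workdir = os.path.join(scratch, "myjob")   # "myjob" is a placeholder directory name
os.makedirs(workdir, exist_ok=True)

# Stage the input from the persistent area into scratch before computing.
shutil.copy(os.path.join(calc, "input.dat"), workdir)

# Run the computation inside scratch ("mysolver" is a placeholder binary).
subprocess.run(["mysolver", "input.dat"], cwd=workdir, check=True)

# Copy results back to /calc before the job ends, since scratch is wiped afterwards.
shutil.copy(os.path.join(workdir, "result.dat"), calc)
</code>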
  
  
  
^partition ^CPU                                   ^  cores  ^  threads  ^  memory^nodename  ^
|E5-2690v4|2x Intel Xeon CPU E5-2690 v4 @ 2.60GHz|   28    |    28     |  128 GB|c1-[01-12]|
|E5-2690v4|2x Intel Xeon CPU E5-2690 v4 @ 2.60GHz|   28    |    28     |  256 GB|c2-[01-08]|
|Phi      |1x Intel Xeon Phi CPU 7210 @ 1.30GHz  |   64    |    256    |  208 GB|c3-[01-08]|
|E5-1650  |1x Intel Xeon CPU E5-1650 v4 @ 3.60GHz|    6    |     6     |   64 GB|c4-[01-16]|

The current cluster (“smmpmech.unileoben.ac.at”) will be integrated into the new cluster as additional partitions.
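The partition names in the first column of the table are what a job requests from SLURM. A hedged sketch of such a request, assuming standard sbatch options and a placeholder script name (the exact per-partition limits are not specified on this page):

<code python>
import subprocess

# Request one full E5-2690v4 node (28 cores, 128 GB according to the table above).
# "job.sh" is a placeholder script name; the memory value is illustrative.
cmd = [
    "sbatch",
    "--partition=E5-2690v4",
    "--nodes=1",
    "--ntasks=28",
    "--mem=120G",   # leave some headroom below the 128 GB of physical memory
    "job.sh",
]
subprocess.run(cmd, check=True)
</code>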

----

===== Network =====

  * Mellanox FDR InfiniBand Network, MT27500 ConnectX-3 (56 Gbit/s)
    * c1-[01-12], c2-[01-08]
    * storage servers (f1, f2)
    * login nodes
  * Gigabit Ethernet
    * all servers and compute nodes
  
  
----
  