===== VSC-3 Storage Full Production / Post-Test Operation =====

VSC-3 provides two types of main storage: the high-performance parallel file system Fraunhofer BeeGFS and the Network File System (NFS). They are accessible under:

  * **NFS**: $HOME/nfs
  * **BeeGFS**: $HOME/fhgfs (same as $GLOBAL)
  * **Per-node logical BeeGFS scratch directories**: $SCRATCH
  * **Local temporary RAM disk**: $TMPDIR (same as $TMP)

$HOME=/home/lv<projectnumber>/username \\
$GLOBAL=/fhgfs/lv<projectnumber>/username \\
$SCRATCH=/fhgfs/rXXnXX/ \\
$TMPDIR=/tmp/123456.789.queue.q (example) \\
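
For illustration, these variables might expand as follows for a hypothetical project ''lv70999'' and user ''exampleuser'' (both placeholders, not real accounts):
<code>
$ echo $HOME
/home/lv70999/exampleuser
$ echo $GLOBAL
/fhgfs/lv70999/exampleuser
</code>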

==== Usage ====

The preferred location for files in compute runs is the BeeGFS parallel file system. Alternatively, $HOME/nfs can be used. $HOME itself can hold e.g. results, settings and source code up to 40GB; it should not be used to store temporary data from compute runs. $HOME itself is backed up; other storage under $HOME, such as $HOME/nfs and $HOME/fhgfs, is not.

Please note that $HOME/fhgfs is a resource shared by all projects.
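
A typical pattern, sketched below with hypothetical directory names, is to keep code and final results under $HOME and stage large working data on BeeGFS for the duration of a run:
<code>
# hypothetical example: stage a run on BeeGFS, keep code and final results under $HOME
$ mkdir -p $GLOBAL/myproject/run_001
$ cp -r $HOME/myproject/input $GLOBAL/myproject/run_001/
# ... the compute job works in $GLOBAL/myproject/run_001 ...
$ cp $GLOBAL/myproject/run_001/results.dat $HOME/myproject/
</code>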

=== Per-node Scratch Directories $SCRATCH ===

Local scratch directories on each node are provided as a link to the Fraunhofer parallel file system and can thus also be viewed from the login nodes as ''/fhgfs/rXXnXX/''.
The parallel file system (and thus the performance) is identical for $SCRATCH and $GLOBAL.
The variable ''$SCRATCH'' expands as:
<code>
$ echo $SCRATCH
/scratch
</code>
These directories are purged after job execution.
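
Since these directories are purged after the job, results have to be copied to $GLOBAL or $HOME before the job ends. A minimal job-script sketch (the application and file names are hypothetical):
<code>
# work in the per-node scratch directory, then save results to BeeGFS
cd $SCRATCH
cp $HOME/myproject/input.dat .
./my_solver input.dat                  # hypothetical application
cp output.dat $GLOBAL/myproject/       # copy results out before the job ends
</code>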

=== Local temporary RAM disk $TMPDIR ===

For smaller files and very fast access, restricted to a single node, the variables ''$TMP'' and ''$TMPDIR'' may be used; both expand to
<code>
$ echo $TMP -- $TMPDIR
/tmp/123456.789.queue.q -- /tmp/123456.789.queue.q
</code>
These directories are purged after job execution.

Please refrain from writing directly to the operating system directory ''/tmp''!
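
As with $SCRATCH, anything placed in $TMPDIR that is needed after the job must be copied out before the job finishes; the file and program names below are hypothetical:
<code>
# keep small, frequently accessed temporary files in the per-job RAM disk
cp $HOME/myproject/lookup_table.dat $TMPDIR/
./my_code --workdir $TMPDIR            # hypothetical program and option
cp $TMPDIR/summary.log $GLOBAL/myproject/
</code>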

=== Job-local scratch directory $JOBLOCAL ===

$JOBLOCAL is a per-job temporary storage facility available on VSC-2 on an experimental basis.

This method scales very well up to several hundred similar jobs. The file system's underlying protocol is the SCSI RDMA Protocol (SRP).

$JOBLOCAL is initially not available on VSC-3 and might be provided at a later time.

==== Quotas ====

The default quota of $HOME is 40GB.

Storage extensions can be requested through the [[https://service.vsc.tuwien.ac.at/|Vergabeassistent]] at Extensions - Storage.
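
To check current usage against the quota, standard tools such as ''du'' can be used; the output below is purely illustrative:
<code>
$ du -sh $HOME
23G     /home/lv70999/exampleuser
</code>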

==== Fair Use ====

The storage resources underlying NFS and BeeGFS are shared. Please use BeeGFS primarily for large, I/O-intensive runs. There is no hard limit on the number of files per run or per project, yet creating, or operating on, file counts on the order of 10^5 and above is strongly discouraged. If millions of (small) files are required for a code, please contact system operations in advance, as the performance experienced by other users can be affected.
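
One way to stay well below such file counts is to pack many small files into a single archive before placing them on the shared file systems; the names below are hypothetical:
<code>
# bundle a directory of many small result files into one archive on BeeGFS
$ cd $SCRATCH
$ tar czf $GLOBAL/myproject/results_run_001.tar.gz results/
</code>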

==== Support ====

Parallel file systems used in large-scale computing are unlike desktop file systems. Contact VSC staff when planning computations with high I/O demands. VSC can also support architecting one-time and recurring large ingress/egress data pipelines and data transfer workflows, and help with optimizing codes for parallel I/O.

==== Backup Policy ====

On VSC-3, $HOME (up to 40GB) is periodically backed up. **No other storage under $HOME, such as $HOME/nfs and $HOME/fhgfs, is backed up**: backing up files in $HOME/nfs and $HOME/fhgfs ($GLOBAL) is solely the responsibility of each user.
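
For example, important data on BeeGFS can be transferred to storage that the user controls; the host and paths below are purely hypothetical:
<code>
# copy selected results from BeeGFS to an external machine you administer
$ rsync -av $GLOBAL/myproject/important_results/ user@backup.example.org:/backups/vsc3/
</code>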

VSC-3 NFS and BeeGFS servers use RAID-6, which can sustain up to two disks failing concurrently. The data path is otherwise not redundant. Data loss may also occur due to failure modes including, but not limited to, disk controller failures and file system software faults.

User data on VSC-1 and VSC-2 [[doku:backup|is not backed up]].