storage_vsc3: revision of 2014/11/18 20:06 by malexand (section [Fair Use]); page removed 2022/02/01 21:26 by goldenberg
===== VSC-3 Storage Full Production/ =====

VSC-3 provides two types of main storage: the high-performance parallel file system BeeGFS (formerly Fraunhofer FhGFS) and the Network File System (NFS). They are accessible under:

  * **NFS**: $HOME/nfs
  * **BeeGFS**: $HOME/fhgfs (same as $GLOBAL)
  * **Per-node logical BeeGFS scratch directories**: $SCRATCH
  * **Local temporary RAM disk**: $TMPDIR (same as $TMP)

<code>
$HOME=/
$GLOBAL=/
$SCRATCH=/
$TMPDIR=/
</code>

==== Usage ====

The preferred location for files in compute runs is the BeeGFS parallel file system. Alternatively,

Please keep in mind that $HOME/fhgfs is a resource shared by all projects.
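
As an illustrative sketch only (the run directory and input file names are made up, and ''$GLOBAL'' falls back to a local directory so the snippet also runs off-cluster), a job might stage its working data onto the parallel file system like this:

```shell
#!/bin/sh
# Hypothetical staging sketch: $GLOBAL is assumed to point at the BeeGFS
# file system as described above; "myrun" and input.dat are made-up names.
GLOBAL="${GLOBAL:-$PWD/global-demo}"   # fallback for running off-cluster
RUNDIR="$GLOBAL/myrun"

mkdir -p "$RUNDIR"
echo "demo input" > input.dat          # stand-in for real input data
cp input.dat "$RUNDIR/"                # stage input onto the parallel FS
cd "$RUNDIR" || exit 1
# ./solver input.dat                   # placeholder for the compute step
```

The point of the sketch is only that bulk I/O during the run happens under $GLOBAL rather than in the NFS-backed $HOME.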

=== Per-node Scratch Directories $SCRATCH ===

Local scratch directories on each node are provided as links into the BeeGFS (Fraunhofer) parallel file system and can thus also be viewed via the login nodes as ''/
The parallel file system (and thus the performance) is identical between $SCRATCH and $GLOBAL.
The variable ''$SCRATCH'' points to this per-node scratch directory:
<code>
$ echo $SCRATCH
/scratch
</code>
These directories are purged after job execution.
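
Because of this purge, results written to ''$SCRATCH'' must be copied to permanent storage before the job script finishes. A minimal sketch (file names are hypothetical, and both directories fall back to local demo paths so the snippet runs anywhere):

```shell
#!/bin/sh
# $SCRATCH is purged after job execution: save results before the job ends.
SCRATCH="${SCRATCH:-$PWD/scratch-demo}"    # per-node scratch (demo fallback)
RESULTS="$PWD/results-demo"                # permanent location, e.g. under $GLOBAL
mkdir -p "$SCRATCH" "$RESULTS"

echo "result data" > "$SCRATCH/out.dat"    # stand-in for solver output
cp "$SCRATCH/out.dat" "$RESULTS/"          # copy back before the purge
```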

=== Local temporary RAM disk $TMPDIR ===

For smaller files requiring very fast access, restricted to single nodes, the variables ''$TMP'' and ''$TMPDIR'' point to a local temporary RAM disk:
<code>
$ echo $TMP -- $TMPDIR
/
</code>
These directories are purged after job execution.

Please refrain from writing directly to the operating system directory ''/
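
Used this way, the RAM disk is well suited to small, short-lived temporaries; a generic sketch (the file name is made up, and ''$TMPDIR'' falls back to /tmp when run off-cluster):

```shell
#!/bin/sh
# Use the per-job RAM disk for small, fast temporary files; files here
# are purged automatically after job execution.
TMPDIR="${TMPDIR:-/tmp}"                 # fallback when run off-cluster
TMPFILE="$TMPDIR/fast-scratch.$$"        # $$ makes the name per-process

sort > "$TMPFILE" <<EOF
banana
apple
EOF
head -n 1 "$TMPFILE"                     # first line after sorting: apple
```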

=== Job-local scratch directory $JOBLOCAL ===

$JOBLOCAL is a per-user, per-job temporary storage facility that was available on VSC-2 on an experimental basis.

This method scales very well up to several hundred similar jobs. The file system's underlying protocol is the SCSI RDMA Protocol (SRP).

$JOBLOCAL is initially not available on VSC-3 and might be provided at a later time.

==== Quotas ====

The default quota of $HOME is 40 GB.

Storage extensions can be requested through [[https://
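
To see how much space a directory tree currently occupies against such a quota, standard tools suffice; this is a generic ''du'' check, not a VSC-specific command:

```shell
#!/bin/sh
# Report the total size of a directory tree, human-readable.
# $HOME is the obvious target on the cluster; $PWD serves as a safe demo.
DIR="${1:-$PWD}"
du -sh "$DIR" | awk '{print $1}'   # e.g. "12G": compare against the 40 GB quota
```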

==== Fair Use ====

The storage resources underlying NFS and BeeGFS are shared. Please use BeeGFS primarily for large, I/O-intensive runs. There is no hard limit on the number of files per run or per project; nevertheless, it is strongly discouraged to create/
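
A common way to respect this on any shared parallel file system is to bundle many small files into a single archive before placing them on BeeGFS; a generic sketch (the file and archive names are purely illustrative):

```shell
#!/bin/sh
# Write one archive to the shared file system instead of thousands of
# tiny files; all names below are made up for the demonstration.
SRC="$PWD/many-small-files"
mkdir -p "$SRC"
for i in 1 2 3; do echo "data $i" > "$SRC/file$i.txt"; done

tar -czf bundle.tar.gz -C "$SRC" .       # one object instead of many
tar -tzf bundle.tar.gz | grep -c txt     # counts the 3 files in the archive
```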

==== Backup Policy ====

On VSC-3, $HOME is periodically backed up, up to its quota of 40 GB. No other data under $HOME, such as $HOME/nfs and $HOME/fhgfs, is covered by this backup: backing up files in $HOME/nfs and $HOME/fhgfs ($GLOBAL) is solely the responsibility of each user.

The VSC-3 NFS and BeeGFS servers use RAID-6, which can sustain up to two disks failing concurrently. The data path is otherwise not redundant. Data loss may also occur due to failure modes including, but not limited to, disk controller failure and file system software faults.

User data on VSC-1 and VSC-2 [[doku: