doku:vsc3_storage (last modified 2021/08/23 08:52 by goldenberg)
===== VSC-3 Storage =====
This article is about the ''$GLOBAL'' and ''$HOME'' filesystems of VSC-3. If you are looking for information about the bioinformatics storage, see [[binf_nodes|here]].
VSC-3 provides three facilities for persisting data: the high-performance BeeGFS parallel file system (formerly the Fraunhofer Parallel File System, FhGFS), the Network File System (NFS), and a node-local RAM disk. They are accessible under:
  * **NFS**: ''$HOME''
  * **BeeGFS (former FhGFS)**: ''$GLOBAL''
  * **Scratch RAM disk**: ''$TMPDIR''
==== Usage ''$HOME'' ====

Backup of ''$HOME'' is the user's responsibility.

''$HOME'' is provided from file servers with disk arrays that are exported over the Network File System (NFS). Even on highly scaled storage such as on VSC-3, the number of concurrent file operations is bound by spinning-disk physics: small-file (write) operations can easily saturate capacity. Hence, please keep in mind that ''$HOME'' is a resource shared by all projects on a given NFS server. If your project requires persisting a large number of small files, please contact the VSC administration in advance.
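A practical way to avoid small-file load on ''$HOME'' is to pack many small files into one archive before copying them over. A minimal sketch, with placeholder file and directory names that are not from this article:

```shell
# Pack a directory of many small result files into a single archive;
# copying one large file to $HOME (NFS) causes far fewer file
# operations than copying thousands of small ones. Names are placeholders.
SRC=results-demo
mkdir -p "$SRC"
printf 'a\n' > "$SRC/a.txt"      # stand-ins for real result files
printf 'b\n' > "$SRC/b.txt"
tar czf results.tar.gz "$SRC"
# cp results.tar.gz "$HOME/"     # then copy the single archive, not the files
tar tzf results.tar.gz           # list the archive contents
```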
==== Scratch Space Usage: ''$GLOBAL'' and ''$SCRATCH'' ====

Every user has a personal scratch directory on the BeeGFS parallel file system, reachable through the ''$GLOBAL'' environment variable:

<code>
$ echo $GLOBAL
/...
</code>

The directory is writeable by the user and readable by the group members. It is advisable to use these directories in particular for jobs with heavy I/O operations. In addition, this reduces the load on the file server holding the ''$HOME'' directories.

The BeeGFS (former Fraunhofer parallel file system) is shared by all users and by all nodes. Single jobs producing heavy load can slow down the file system for all users.

Lifetime of data is limited, see the table below.
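A typical heavy-I/O job therefore stages its working data into ''$GLOBAL'', runs there, and copies the results back afterwards. A sketch under assumed names: the job directory and the input/output files are hypothetical, and the ''/tmp'' fallback only lets the sketch run outside VSC-3, where ''$GLOBAL'' is predefined:

```shell
# Stage a heavy-I/O job through the BeeGFS scratch space. On VSC-3 the
# $GLOBAL variable is set by the system; the fallback is for illustration.
GLOBAL="${GLOBAL:-/tmp/global-demo}"
JOBDIR="$GLOBAL/myjob.$$"                # hypothetical per-job working directory
mkdir -p "$JOBDIR"
echo "input data" > "$JOBDIR/input.dat"  # stand-in for staging real input
# stand-in for the real compute step, run inside the scratch directory:
( cd "$JOBDIR" && tr 'a-z' 'A-Z' < input.dat > output.dat )
cp "$JOBDIR/output.dat" .                # copy results back (e.g. to $HOME)
rm -r "$JOBDIR"                          # clean up the scratch space
cat output.dat                           # prints: INPUT DATA
```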
- | |||
- | |||
=== Per-node Scratch Directories ''$SCRATCH'' ===

Local scratch directories on each node are provided as a link to the BeeGFS parallel file system and can thus also be viewed via the login nodes. The parallel file system (and thus the performance) is identical between ''$SCRATCH'' and ''$GLOBAL''. The variable ''$SCRATCH'' points to the node-local scratch directory:

<code>
$ echo $SCRATCH
/scratch
</code>

These directories are purged after job execution.
==== Usage Local Scratch RAM Disk ''$TMPDIR'' ====

''$TMPDIR'' points to a node-local RAM disk. Since it resides in main memory, it is fast even for many small files, but its capacity is limited and its contents do not survive the job.
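For workloads dominated by many small files or frequent seeks, unpacking the inputs into the RAM disk at job start avoids hammering the network filesystems. A sketch with placeholder files; on VSC-3 ''$TMPDIR'' is predefined, and the ''/tmp'' fallback only makes the sketch portable:

```shell
# Work on many small files from the node-local RAM disk instead of NFS.
TMP="${TMPDIR:-/tmp}/ramdisk-demo.$$"
mkdir -p "$TMP"
# stand-in for e.g.: tar xzf "$HOME/many-small-files.tar.gz" -C "$TMP"
for i in 1 2 3; do echo "$i" > "$TMP/file$i.txt"; done
COUNT=$(ls "$TMP" | wc -l)   # the job now reads these files from memory
rm -r "$TMP"                 # RAM-disk contents do not survive the job anyway
echo "$COUNT"
```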
==== Comparison of scratch directories ====

^ ^ ''$GLOBAL'' ^ ''$SCRATCH'' ^ ''$TMPDIR'' ^
| Recommended file size | large | large | small |
| Lifetime | | | |
| Size | x00 TB (for all users) | | |
| Scaling | | | |
| Visibility | | | |
| Recommended usage | large files, available temporarily after job life | large files | many small files (>1000), or many seek operations within a file |
\\
==== Quotas ====

Since 2017-06-02, quotas are enforced for the ''$GLOBAL'' filesystem. Additional information can be found [[vsc3_global_quotas|here]].

Storage extensions can be requested through the VSC service website.
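Before requesting an extension, it is worth checking how much space a directory tree actually consumes; ''du'' works on any of the filesystems mentioned here. The directory below is a placeholder:

```shell
# Summarize the disk usage of one directory tree (placeholder path;
# on VSC-3 you would point this at your $GLOBAL or $HOME directory).
DIR="${GLOBAL:-/tmp}"
USAGE=$(du -sh "$DIR" 2>/dev/null | awk '{print $1}')
echo "total usage of $DIR: $USAGE"
```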
==== Fair Use ====
VSC-3 NFS and BeeGFS (former FhGFS) servers utilize RAID-6, which can sustain up to two disks failing concurrently. The data path is otherwise not redundant. Data loss may also occur due to failure modes including, but not limited to, natural disasters, cooling failure, disk controller failure, and filesystem software faults.
Users are therefore responsible for backing up their own data on VSC-3.