This article is about the filesystems of VSC-3. If you are searching for info about the bioinformatics storage, the article can be found here.
VSC-3 provides three facilities for persisting data: the high-performance BeeGFS parallel filesystem (formerly the Fraunhofer Parallel Filesystem, FhGFS), the Network File System (NFS), and a node-local RAM disk. They are accessible under:
$HOME, which expands to /home/lv<project>/<username>
$GLOBAL, which expands to /fhgfs/global/lv<project>/<username>
$SCRATCH, which expands to /fhgfs/<node> (node-local)
$HOME is the location of the user's UNIX home directory. It can be accessed from login and compute nodes. $HOME can be used to hold results, settings, source code etc. - data for which high concurrent job throughput and support for large file sizes are not required. Conversely, the parallel BeeGFS filesystem (see below) should be utilized to persist temporary data in compute runs.
Managing the data in $HOME is each user's responsibility.
$HOME is provided from file servers with disk arrays that are exported over the network file system (NFS). Even on highly scaled storage such as on VSC-3, the number of concurrent file operations is bound by spinning-disk physics: small-file (write) operations can easily saturate capacity. Hence, please mind that
$HOME is a shared resource over all projects on a given NFS server. In case your project requires persistence over a large number of small files please contact VSC administration in advance.
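Where a workflow does produce many small output files, they can often be packed into a single archive before they land on the NFS-backed $HOME, so the file server handles one object instead of hundreds. A minimal sketch, where the results directory and file names are purely illustrative:

```shell
# Sketch: pack many small output files into one archive so the NFS server
# sees a single file operation instead of hundreds. Paths are illustrative.
workdir=$(mktemp -d)
mkdir -p "$workdir/results"
for i in $(seq 1 100); do
    echo "sample $i" > "$workdir/results/out_$i.txt"
done
# one archive now replaces 100 small files on the shared file server
tar -czf "$workdir/results.tar.gz" -C "$workdir" results
rm -rf "$workdir/results"
```

On VSC-3 the archive, rather than the raw file tree, would then be copied to $HOME.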
The BeeGFS parallel file system (former FhGFS) on VSC-3 provides a large (initially approx. 0.5 PB) scratch space. The environment variable
$GLOBAL expands to:
$ echo $GLOBAL
/global/lv70999/username
The directory is writable by the user and readable by the group members. It is advisable to make use of these directories in particular for jobs with heavy I/O operations. In addition, this reduces the load on the file server holding the $HOME directories.
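As a hedged sketch of this pattern (all directory and file names are assumptions, and the variables fall back to temporary directories when run outside a VSC-3 job), a job step can do its heavy I/O under $GLOBAL and keep only the final output on $HOME:

```shell
# Sketch: run I/O-heavy work in the BeeGFS-backed $GLOBAL, then keep only
# the final result on the NFS-backed $HOME. Names are illustrative; outside
# VSC-3 the variables fall back to temporary directories.
GLOBAL=${GLOBAL:-$(mktemp -d)}
RESULTS=${RESULTS:-$(mktemp -d)}     # stand-in for a results dir under $HOME
jobdir="$GLOBAL/run_$$"
mkdir -p "$jobdir"
# ... the computation writes its working files here ...
echo "checkpoint" > "$jobdir/state.chk"
echo "final result" > "$jobdir/final.out"
cp "$jobdir/final.out" "$RESULTS/"   # persist only what is worth keeping
rm -rf "$jobdir"                     # free the shared scratch space promptly
```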
The BeeGFS (formerly the Fraunhofer parallel file system) is shared by all users and by all nodes. Single jobs producing heavy load (≫1000 requests per second) have been observed to reduce responsiveness for all jobs and all users.
Lifetime of data is limited, see table below.
Local scratch directories on each node are provided as a link to the BeeGFS parallel file system and can thus also be viewed via the login nodes as /fhgfs/<node>.
The parallel file system (and thus the performance) is identical between $SCRATCH and $GLOBAL.
$SCRATCH expands to:
$ echo $SCRATCH
/scratch
These directories are purged after job execution.
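Because of this purge, the last step of a job should copy anything worth keeping off the node. A minimal sketch, where the file names are illustrative and the variables fall back to temporary directories outside a job:

```shell
# Sketch: copy node-local results to $GLOBAL before the job ends and
# $SCRATCH is purged. Names are illustrative; variables fall back to
# temporary directories when run outside a VSC-3 job.
SCRATCH=${SCRATCH:-$(mktemp -d)}
GLOBAL=${GLOBAL:-$(mktemp -d)}
# ... the computation writes node-local output ...
echo "done" > "$SCRATCH/step1.out"
# final job step: move results off the node before the purge
cp "$SCRATCH"/*.out "$GLOBAL/"
```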
$TMPDIR provides a small, ephemeral RAM disk sized at 50% of node RAM, e.g. 32 GB on a 64 GB node. It suits very fast local access that is restricted to single nodes, especially for many small files. The RAM disk does not have to be requested explicitly in jobs and grows with its file contents, subtracting its usage from the available memory. The variable
$TMPDIR expands to /tmp. Please do not hardcode /tmp directly. Directories in
$TMPDIR are purged after job execution.
$ echo $TMP -- $TMPDIR
/tmp/123456.789.queue.q -- /tmp/123456.789.queue.q
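For a many-small-files workload, a plausible pattern (all names below are illustrative assumptions) is to work in the RAM-backed $TMPDIR, then store a single packed archive on the shared filesystem, remembering that files in $TMPDIR count against the node's available memory:

```shell
# Sketch: create many small files in the RAM-backed $TMPDIR (fast, local),
# then store one packed archive on the shared filesystem. Names are
# illustrative; variables fall back to defaults outside a VSC-3 job.
TMPDIR=${TMPDIR:-/tmp}
GLOBAL=${GLOBAL:-$(mktemp -d)}
framedir="$TMPDIR/frames_$$"       # job-private directory on the RAM disk
mkdir -p "$framedir"
for i in $(seq 1 200); do
    echo "frame $i" > "$framedir/frame_$i.dat"   # RAM-backed, so very fast
done
tar -czf "$GLOBAL/frames.tar.gz" -C "$TMPDIR" "frames_$$"
rm -rf "$framedir"                 # free the memory the files occupied
```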
|                       | $GLOBAL | $SCRATCH | $TMPDIR |
|-----------------------|---------|----------|---------|
| Recommended file size | large | large | small |
| Lifetime | files older than 90 days deleted if $GLOBAL space is running low | job | job |
| Size | x00 TB (for all users) | x00 TB (for all users) | a few GB (within memory) |
| Scaling | does not fit very large numbers of small-file I/O | does not fit very large numbers of small-file I/O | very good (local) |
| Visibility | global | node (see above) | node |
| Recommended usage | large files, available temporarily after job life | large files | many small files (>1000), or many seek operations within a file |
Disk quotas are set per project. Users within a project share the quota.
Storage extensions can be requested through Vergabeassistent at Extensions - Storage.
Since 2017-06-02 quotas are enforced for the
$GLOBAL filesystem. Additional information can be found here
The storage resources underlying NFS and BeeGFS (formerly FhGFS) are shared. Please utilize BeeGFS primarily for large, I/O-intensive runs. The number of files per run or per project is not hard-limited. Yet it is strongly discouraged to create or operate on O(10^5) or more files. If millions of (small) files are required by a code, please contact system operation in advance, as performance impact on other users can occur.
Parallel filesystems used in large-scale computing are unlike desktop file systems; contact VSC staff when planning high-I/O computation. VSC can also support architecting one-time and recurrent large ingress/egress data pipelines, recurrent large data-transfer workflows, and optimizing codes for parallel I/O.
Backup of user files independent of location is solely the responsibility of each user.
VSC-3 NFS and BeeGFS (formerly FhGFS) servers utilize RAID-6, which can sustain up to two disks failing concurrently. The data path is otherwise not redundant. Data loss may also occur due to failure modes including, but not limited to, natural disaster, cooling failure, disk controller failure and filesystem software faults.
User data on VSC-2 is not backed up.