The storage targets on VSC are referenced by the following environment variables:

$HOME
$GLOBAL
$SCRATCH
$BINFL
$BINFS
$TMPDIR
The $HOME environment variable points to your personal home directory.

The $GLOBAL and $SCRATCH environment variables point to:

$GLOBAL ... /global/lv70XXX/username
$SCRATCH ... /scratch

The quota of $GLOBAL and $SCRATCH can be checked with:

beegfs-ctl --getquota --cfgFile=/etc/beegfs/global3.d/beegfs-client.conf --gid 70XXX
VSC-3 > beegfs-ctl --getquota --cfgFile=/etc/beegfs/global3.d/beegfs-client.conf --gid 70824

      user/group     ||           size          ||    chunk files
     name     |  id  ||    used    |    hard    ||  used   |  hard
--------------|------||------------|------------||---------|---------
        p70824| 70824||      0 Byte|  500.00 GiB||        0|   100000
The $BINFL environment variable points to:

$BINFL ... /binfl/lv70XXX/username

Its quota can be checked with:

beegfs-ctl --getquota --cfgFile=/etc/beegfs/hdd_storage.d/beegfs-client.conf --gid 70XXX
VSC-3 > beegfs-ctl --getquota --cfgFile=/etc/beegfs/hdd_storage.d/beegfs-client.conf --gid 70824

      user/group     ||           size          ||    chunk files
     name     |  id  ||    used    |    hard    ||  used   |  hard
--------------|------||------------|------------||---------|---------
        p70824| 70824||    5.93 MiB|   10.00 GiB||      574|  1000000
The $BINFS environment variable points to:

$BINFS ... /binfs/lv70XXX/username

Its quota can be checked with:

beegfs-ctl --getquota --cfgFile=/etc/beegfs/nvme_storage.d/beegfs-client.conf --gid 70XXX
VSC-3 > beegfs-ctl --getquota --cfgFile=/etc/beegfs/nvme_storage.d/beegfs-client.conf --gid 70824

      user/group     ||           size          ||    chunk files
     name     |  id  ||    used    |    hard    ||  used   |  hard
--------------|------||------------|------------||---------|---------
        p70824| 70824||      0 Byte|    2.00 GiB||        0|     2000
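If you want to check all three BeeGFS targets at once, a small loop over the client config files shown above can help. This is just a sketch; it assumes that your primary Unix group is your project group, so that id -g yields the 70XXX group id used for the quota:

# Sketch: check the quota of all three BeeGFS targets in one go.
# Assumes the primary group is the project group (gid 70XXX).
gid=$(id -g)
for cfg in global3 hdd_storage nvme_storage; do
    echo "== ${cfg} =="
    beegfs-ctl --getquota --cfgFile=/etc/beegfs/${cfg}.d/beegfs-client.conf --gid ${gid}
done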
The $TMPDIR environment variable points to a node-local in-memory filesystem. Its contents are lost when the job ends, so copy any results you want to keep to $HOME or $GLOBAL before the job finishes.
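A typical pattern is to stage data through $TMPDIR inside a job and save the results to a persistent target before the job finishes. A minimal sketch of such a job script, where my_app, my_input and my_output are hypothetical placeholders:

#!/bin/bash
#SBATCH -J tmpdir_staging
#SBATCH -N 1

# Stage the input onto the node-local in-memory filesystem (placeholder names)
cp $GLOBAL/my_input $TMPDIR/

# Run the application on the fast local copy
cd $TMPDIR
$HOME/my_app my_input

# $TMPDIR is wiped after the job, so copy the results back first
cp $TMPDIR/my_output $GLOBAL/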
On VSC-4, the relevant storage targets are the $HOME and $DATA environment variables.

The quota of the $HOME environment variable (VSC-4) can be checked with:

mmlsquota --block-size auto -j home_fs70XXX home
VSC-4 > mmlsquota --block-size auto -j home_fs70824 home
                        Block Limits                             |     File Limits
Filesystem type      blocks    quota    limit  in_doubt   grace  |    files    quota    limit  in_doubt
home       FILESET    63.7M     100G     100G         0    none  |     3822  1000000  1000000         0
The quota of the $DATA environment variable (VSC-4) can be checked with:

mmlsquota --block-size auto -j data_fs70XXX data
VSC-4 > mmlsquota --block-size auto -j data_fs70824 data
                        Block Limits                             |     File Limits
Filesystem type      blocks    quota    limit  in_doubt   grace  |    files    quota    limit  in_doubt
data       FILESET        0   9.766T   9.766T         0    none  |       14  1000000  1000000         0
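Since the fileset name just embeds the project number, you can derive it from your primary group on VSC-4 instead of typing 70XXX by hand. A sketch, assuming the primary group name has the form pNNNNN as in the sample output:

# Sketch: derive the fileset names from the project group name (assumed form pNNNNN)
proj=$(id -gn)                                        # e.g. p70824
mmlsquota --block-size auto -j home_fs${proj#p} home  # -> home_fs70824
mmlsquota --block-size auto -j data_fs${proj#p} data  # -> data_fs70824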
In these exercises we measure the performance of the different storage targets on VSC-3. For that we use the IOR application (https://github.com/LLNL/ior), a standard benchmark for distributed storage systems.
IOR has been built for these exercises with gcc 4.9 and OpenMPI 1.10.2, so load these two modules first:
module purge
module load gcc/4.9 openmpi/1.10.2
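Before running anything it is worth a quick check that the intended toolchain is actually active:

module list          # should list gcc/4.9 and openmpi/1.10.2
which mpicc mpiexec  # should resolve to the openmpi/1.10.2 installation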
Now copy the storage exercises into a folder of your own:
mkdir my_directory_name
cd my_directory_name
cp -r ~training/examples/08_storage_infrastructure/*Benchmark ./
Keep in mind that the results will vary, because other users are working on the storage targets at the same time.
We will now measure the sequential performance of the different storage targets on VSC-3.

a. With one process

cd 01_SequentialStorageBenchmark
# Submit the job
sbatch 01a_one_process_per_target.slrm
# Inspect the corresponding slurm-*.out files
b. With 8 processes
# Submit the job
sbatch 01b_eight_processes_per_target.slrm
# Inspect the corresponding slurm-*.out files
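To compare the runs you do not need the full IOR output; the headline bandwidth appears on lines containing "Max Write", as in the sample results further below, so a quick grep over the output files gives a summary:

# Summarize the write bandwidth of all finished runs
grep -H "Max Write" slurm-*.out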
Take your time and compare the outputs of the two runs. What conclusions can be drawn about the storage targets on VSC-3?
Discuss the following questions with your partner:
$TMPDIR seems to scale pretty well with the number of processes, although it is an in-memory filesystem. Why is that happening?

Exercise 1a:
HOME:   Max Write: 237.37 MiB/sec (248.91 MB/sec)
GLOBAL: Max Write: 925.64 MiB/sec (970.60 MB/sec)
BINFL:  Max Write: 1859.69 MiB/sec (1950.03 MB/sec)
BINFS:  Max Write: 1065.61 MiB/sec (1117.37 MB/sec)
TMP:    Max Write: 2414.70 MiB/sec (2531.99 MB/sec)
Exercise 1b:
HOME:   Max Write: 371.76 MiB/sec (389.82 MB/sec)
GLOBAL: Max Write: 2195.28 MiB/sec (2301.91 MB/sec)
BINFL:  Max Write: 2895.24 MiB/sec (3035.88 MB/sec)
BINFS:  Max Write: 2950.23 MiB/sec (3093.54 MB/sec)
TMP:    Max Write: 16764.76 MiB/sec (17579.12 MB/sec)
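One way to read these numbers is as the speedup from 1 to 8 processes per target, computed from the sample results above:

HOME:   371.76 / 237.37    ≈ 1.6x
GLOBAL: 2195.28 / 925.64   ≈ 2.4x
BINFL:  2895.24 / 1859.69  ≈ 1.6x
BINFS:  2950.23 / 1065.61  ≈ 2.8x
TMP:    16764.76 / 2414.70 ≈ 6.9x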
We will now measure the storage performance for tiny 4 KiB random writes.
a. With one process

cd 02_RandomioStorageBenchmark
# Submit the job
sbatch 02a_one_process_per_target.slrm
# Inspect the corresponding slurm-*.out files
b. With 8 processes
# Submit the job
sbatch 02b_eight_processes_per_target.slrm
# Inspect the corresponding slurm-*.out files
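If you want to reproduce such a measurement by hand: IOR switches to random offsets with -z, and -t sets the transfer size. A sketch of a comparable invocation (the block size, file name and process count are assumptions; the job scripts may use different settings):

# Sketch: 8 processes, 4 KiB random writes, one file per process
mpiexec -np 8 ./ior -w -z -F -t 4k -b 64m -o $GLOBAL/ior_testfile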
Take your time and compare the outputs of the two runs. Do additional processes speed up the I/O activity?

Now compare your results to the sequential runs in exercise 1. What can be concluded about random I/O versus sequential I/O on the VSC-3 storage targets?
Discuss the following questions with your partner:
Your application performs lots of small random writes to $GLOBAL. How could you speed up your application? (See the illustration after the results below.)

Exercise 2a:
HOME:   Max Write: 216.06 MiB/sec (226.56 MB/sec)
GLOBAL: Max Write: 56.13 MiB/sec (58.86 MB/sec)
BINFL:  Max Write: 42.70 MiB/sec (44.77 MB/sec)
BINFS:  Max Write: 41.39 MiB/sec (43.40 MB/sec)
TMP:    Max Write: 1428.41 MiB/sec (1497.80 MB/sec)
Exercise 2b:
HOME:   Max Write: 249.11 MiB/sec (261.21 MB/sec)
GLOBAL: Max Write: 235.51 MiB/sec (246.95 MB/sec)
BINFL:  Max Write: 414.46 MiB/sec (434.59 MB/sec)
BINFS:  Max Write: 431.59 MiB/sec (452.55 MB/sec)
TMP:    Max Write: 10551.71 MiB/sec (11064.27 MB/sec)
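As a starting point for the discussion: for this kind of workload the transfer size usually matters most. A quick, purely illustrative comparison with dd, writing the same 256 MiB once with tiny and once with large blocks (the test file path is a placeholder):

# Same amount of data, very different transfer sizes (illustrative only)
dd if=/dev/zero of=$GLOBAL/dd_test bs=4k count=65536   # many tiny writes
dd if=/dev/zero of=$GLOBAL/dd_test bs=1M count=256     # few large writes
rm $GLOBAL/dd_test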