Memory-intensive Jobs
Jobs requiring more than 2 GB of memory per process have several alternatives. They can use
- a node with more memory,
- a parallel environment with fewer processes per node,
- increased virtual memory, or
- swap space (still experimental).
ad 1. node with more memory
The free memory on the nodes of a job can be listed, e.g. with
qstat -F mem_free | grep -B 2 <job-id>
256 GB per node
There are 2 nodes with 64 cores and 256 GB of memory each, which are accessible via the queue highmem.q. Specify in your job script:
#$ -q highmem.q
64 GB and 128 GB per node
Several nodes on the VSC-2 have 64 GB or more instead of the 32 GB of the standard nodes. To use one of these nodes, just add '-l mem_free=50G' or '-l mem_free=100G' to the qsub command.
See also memory usage of running jobs!
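As a sketch, the resource request can be given either on the qsub command line or as a directive inside the job script (the script name is a placeholder):

```shell
# request a node with at least 50 GB of free memory, on the command line
qsub -l mem_free=50G job.sh

# or, equivalently, as a directive inside job.sh:
#$ -l mem_free=50G
```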
ad 2. parallel environment with fewer processes per node
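The idea of this alternative is to spread the processes of a job over more nodes, so that each process gets a larger share of a node's memory. A hypothetical sketch follows; the parallel environment names are site-specific (they can be listed with qconf -spl), and the PE name and node size used here are assumptions, not taken from this page:

```shell
# list the parallel environments configured on the cluster
qconf -spl

# hypothetical: request 32 slots in a PE that places only 8 processes
# on each (assumed 16-core, 32 GB) standard node, so that each process
# gets about 4 GB instead of 2 GB
#$ -pe mpich8 32
```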
ad 3. increased virtual memory
Some programs allocate more memory than they actually use. This was especially true of old FORTRAN 77 programs, which had to fix at compile time how much memory would be used. These programs can be allowed to allocate 50% more memory than is physically available by putting #$ -l overcommit_mem=true
in the job script. Unfortunately, the whole node might crash, reboot, and drop out of the queuing system, so please use this option wisely!
ad 4. swap space (still experimental)
A novel feature of the VSC-2 is remote swap space (implemented using the 'SCSI RDMA Protocol', SRP), which is used by specifying '#$ -l swapsize_GB=32' or a multiple of 32. Each node of a job gets the same amount of swap space.
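A sketch of a job script requesting remote swap space (64 GB per node here, i.e. a multiple of 32; the job name, PE request, and program name are placeholders):

```shell
#!/bin/bash
#$ -N swap_job           # job name (placeholder)
#$ -l swapsize_GB=64     # 64 GB of remote swap on every node of the job
#$ -pe mpich 16          # hypothetical PE request

./my_program             # placeholder for the actual application
```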