Memory-intensive Jobs

For jobs requiring more than 2 GB of memory per process there are several alternatives. They might use

  1. a node with more memory,
  2. a parallel environment with fewer processes per node,
  3. increased virtual memory, or
  4. swap space (still experimental).
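As a quick illustration of alternative 2, halving the number of processes per node doubles the memory available to each process. The figures below are illustrative only (a 32 GB standard node is assumed, as mentioned later in this page):

```shell
# Per-process memory on a standard node (32 GB assumed here; adjust
# node_mem_gb to the actual node size of your cluster).
node_mem_gb=32
for ppn in 16 8 4; do
    echo "$ppn processes per node -> $((node_mem_gb / ppn)) GB per process"
done
```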

The free space on the nodes of a job can be listed, e.g. by

qstat -F mem_free | grep -B 2 <job-id>

256 GB per node

There are 2 nodes with 64 cores and 256 GB of memory each, which are accessible via the queue highmem.q. Specify in your job script:

#$ -q highmem.q
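A complete job script for the high-memory nodes might look like the following sketch. The job name, program name, parallel environment name (mpich) and slot count are assumptions for illustration; check them against your site's configuration:

```shell
#!/bin/bash
#$ -N highmem_job       # job name (placeholder)
#$ -q highmem.q         # run on one of the 256 GB nodes
#$ -pe mpich 64         # parallel environment and slot count (assumed names)
#$ -V                   # export the current environment to the job

# Placeholder executable; replace with your actual program.
mpirun -np $NSLOTS ./my_program
```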
64 GB and 128 GB per node

Several nodes of the VSC-2 have 64 GB or 128 GB of memory instead of the 32 GB of the standard nodes. To use one of these nodes, just add '-l mem_free=50G' or '-l mem_free=100G' to qsub.
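The request can be given either on the command line or as a directive inside the job script; the script name below is a placeholder:

```shell
# On the command line (my_job.sh is a placeholder):
qsub -l mem_free=50G my_job.sh

# Or, equivalently, as a directive in the job script itself:
#$ -l mem_free=100G
```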

See also: memory usage of running jobs.

Some programs allocate more memory than they actually use. This was especially true of old FORTRAN 77 programs, which had to fix at compile time how much memory would be used. Such programs can be allowed to allocate up to 50% more memory than is physically available by adding #$ -l overcommit_mem=true to the job script. Unfortunately, it might happen that the whole node crashes, reboots and leaves the queuing system, so please use this option wisely!

A novel feature of the VSC-2 is remote swap space (implemented using the 'SCSI RDMA Protocol', SRP), which is requested by specifying '#$ -l swapsize_GB=32' or a multiple of 32. Each node of a job gets the same amount of swap space.
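A minimal job-script sketch requesting 64 GB of remote swap per node (twice the 32 GB granularity); the job and program names are placeholders:

```shell
#!/bin/bash
#$ -N swap_job          # job name (placeholder)
#$ -l swapsize_GB=64    # remote swap via SRP; must be a multiple of 32

# Placeholder executable; replace with your actual program.
./my_memory_hungry_program
```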

  • doku/memory.txt
  • Last modified: 2015/09/17 12:32
  • by jz