

COMSOL

The following case, including the directory structure and the appropriate batch file, is provided here: karman.rar

Available versions of COMSOL can be listed by executing the following line:

module avail Comsol

Currently on VSC-4, these versions can be loaded:

  • Comsol/5.5
  • Comsol/5.6
  • Comsol/6.1
Load your preferred version with:

module load *your preferred module*
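For example, to load COMSOL 6.1 from the list above and verify that it is active (module list is a standard environment-modules command, not specific to COMSOL):

module load Comsol/6.1
module list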

In general you define your complete case on your local machine and save it as a *.mph file.
This file contains all necessary information to run a successful calculation on the cluster.
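The *.mph file then has to be transferred to the cluster, e.g. with scp. The sketch below assumes a VSC-4 login host and a target directory of your choice; adapt both to your account:

scp karman.mph <username>@vsc4.vsc.ac.at:~/karman/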


An example of a job script is shown below.

#!/bin/bash
# slurmsubmit.sh

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --job-name="karman"
#SBATCH --partition=skylake_0384
#SBATCH --qos=skylake_0384

module purge
module load Comsol/5.6

MODELTOCOMPUTE="karman"
path=$(pwd)

INPUTFILE="${path}/${MODELTOCOMPUTE}.mph"
OUTPUTFILE="${path}/${MODELTOCOMPUTE}_result.mph"
BATCHLOG="${path}/${MODELTOCOMPUTE}.log"

echo "reading the inputfile"
echo $INPUTFILE
echo "writing the resultfile to"
echo $OUTPUTFILE
echo "COMSOL logs written to"
echo $BATCHLOG
echo "and the usual slurm...out"

# COMSOL's internal options for the number of nodes (-nn) and related settings (-np, -nnhost, ...) are deduced from SLURM
comsol batch -mpibootstrap slurm -inputfile ${INPUTFILE} -outputfile ${OUTPUTFILE} -batchlog ${BATCHLOG} -alivetime 15 -recover -mpidebug 10

COMSOL generates a huge amount of temporary files during the calculation. By default these files are saved in $HOME, which can lead to errors once the space available there is exhausted. To avoid this, change the temporary directory ($TMPDIR) to e.g. /local, so that the temporary files are stored on the SSD storage local to the compute node. Simply augment the comsol command in the job script with the following option:

-tmpdir "/local"
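Combined with the command from the job script above, the full COMSOL call then reads (same options as before, only the temporary directory is redirected to /local):

comsol batch -mpibootstrap slurm -inputfile ${INPUTFILE} -outputfile ${OUTPUTFILE} -batchlog ${BATCHLOG} -alivetime 15 -recover -mpidebug 10 -tmpdir "/local"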
The job is then submitted with:

sbatch karman.job
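Once submitted, the job can be monitored with the usual SLURM commands, for example:

squeue -u $USER      # list your pending and running jobs
scancel <jobid>      # cancel a job if necessary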

If your case is less demanding in terms of hardware resources, i.e. your job does not need a full VSC-4 node with 48 cores, make use of one of the shared nodes. These are non-exclusive nodes, so more than one job can run on the same hardware at the same time.

On these nodes you have to tell SLURM how much memory (RAM) your case needs. This value should be less than the maximum memory of these nodes, which is 96 GB; otherwise your job needs a whole node anyway. Here we use --mem=20G to dedicate 20 GB of memory.

#!/bin/bash
# slurmsubmit.sh

#SBATCH -n 1
#SBATCH --ntasks-per-node=1
#SBATCH --job-name="clustsw"
#SBATCH --qos=skylake_0096
#SBATCH --mem=20G

hostname

module purge
module load Comsol/5.6
module list
.
.
.
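After the job has finished, the memory it actually consumed can be checked with sacct (a standard SLURM accounting command; replace <jobid> with the ID reported by sbatch), which helps to choose a sensible --mem value for future runs:

sacct -j <jobid> --format=JobID,JobName,MaxRSS,Elapsed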

