COMSOL

The following example case, including the directory structure and an appropriate batch file, is provided here: karman.rar
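
If you work with the archive directly on the cluster, it can be unpacked for example as follows (assuming the unrar utility is available; otherwise unpack it locally and copy the files over):

unrar x karman.rar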

Available versions of Comsol can be found by executing the following line:

module avail Comsol

Currently on VSC-4 and VSC-5, these versions can be loaded:

  • Comsol/5.5
  • Comsol/5.6
  • Comsol/6.1

Load your preferred version with:

module load *your preferred module*
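
For example, to load the newest version listed above and check that it is active:

module load Comsol/6.1
module list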

In general, you define your complete case on your local machine and save it as a *.mph file.
This file contains all the information needed to run the calculation successfully on the cluster.
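
The *.mph file then has to be transferred to the cluster. A minimal sketch, assuming the VSC-5 login address vsc5.vsc.ac.at and a hypothetical username and target directory (adjust both to your account and project layout):

scp karman.mph myuser@vsc5.vsc.ac.at:~/karman/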


An example of a job script is shown below.

#!/bin/bash
# slurmsubmit.sh

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --job-name="karman"
#SBATCH --partition=zen3_0512
#SBATCH --qos=zen3_0512_devel

export I_MPI_PIN_RESPECT_CPUSET=0
export I_MPI_PIN_PROCESSOR_LIST=0-3

module purge
module load intel-mpi/2021.5.0 
module load Comsol/6.1

MODELTOCOMPUTE="karman"
path=$(pwd)

INPUTFILE="${path}/${MODELTOCOMPUTE}.mph"
OUTPUTFILE="${path}/${MODELTOCOMPUTE}_result.mph"
BATCHLOG="${path}/${MODELTOCOMPUTE}.log"

echo "reading the inputfile"
echo $INPUTFILE
echo "writing the resultfile to"
echo $OUTPUTFILE
echo "COMSOL logs written to"
echo $BATCHLOG
echo "and the usual slurm...out"

# Example command for VSC-5

comsol -mpi intel -np 4 -nn 4 batch slurm -inputfile ${INPUTFILE} -outputfile ${OUTPUTFILE} -batchlog ${BATCHLOG} -alivetime 600

More information about Comsol GUI applications and Comsol batch jobs can be found here: TUCOLAB

COMSOL generates a huge amount of temporary files during the calculation. By default, these files are saved in $HOME, which can quickly fill up your quota and lead to errors. To avoid this, change the path of $TMPDIR to e.g. /local, so that the temporary files are stored on the SSD storage local to the compute node. To do so, simply augment the comsol command in the job script with the following option:

-tmpdir "/local"
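
For example, the batch command from the job script above then looks like this (identical to before, only -tmpdir added):

comsol -mpi intel -np 4 -nn 4 batch slurm -inputfile ${INPUTFILE} -outputfile ${OUTPUTFILE} -batchlog ${BATCHLOG} -alivetime 600 -tmpdir "/local"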

The job is then submitted with:

sbatch karman.job
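
Once the job is submitted, it can be monitored with the usual SLURM tools, for example:

squeue -u $USER             # list your queued and running jobs
scontrol show job <jobid>   # detailed information on a specific job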

If your case is not that demanding concerning hardware resources, i.e. your job does not need the resources of a full VSC-4 node with 48 cores, then make use of one of the shared nodes. These are non-exclusive nodes, so more than one job can run at the same time on the same hardware.

On these nodes you have to tell SLURM how much memory (RAM) your case needs. This value should be less than the maximum memory of these nodes, which is 96 GB; otherwise your job needs a whole node anyway. Here we use --mem=20G to dedicate 20 GB of memory (see the note after the script below on how to estimate this value).

#!/bin/bash
# slurmsubmit.sh

#SBATCH -n 1
#SBATCH --ntasks-per-node=1
#SBATCH --job-name="clustsw"
#SBATCH --qos=skylake_0096
#SBATCH --mem=20G

hostname

module purge
module load Comsol/5.6
module list
.
.
.
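
To choose a reasonable value for --mem, it can help to check how much memory a completed job actually used. A quick way to do this with SLURM's accounting tool (replace <jobid> with the ID of your job):

sacct -j <jobid> --format=JobID,JobName,MaxRSS,Elapsed,State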

