# VSC Wiki

## Systems

### VSC 3

The original VSC-3 has been decommissioned. The extension VSC-3+, the Bioinformatics nodes and the GPU nodes are still operational.


# COMSOL

The following case is provided here, including the directory structure
and the appropriate batch file: karman.rar

## Module

Available versions of COMSOL can be found by executing the following line:

```
module avail 2>&1 | grep -i comsol
```

Currently on VSC-4, these versions can be loaded:

• Comsol/5.5
• Comsol/5.6

```
module load *your preferred module*
```

## Workflow

In general, you define your complete case on your local machine and save it as a *.mph file.
This file contains all information necessary to run a successful calculation on the cluster.
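Once the *.mph file is saved, it has to be copied to the cluster before submitting the job. A minimal sketch using scp is shown below; the username, login host and target directory are assumptions and must be adapted to your own account:

```shell
# Copy the model file to the cluster (hypothetical user/host/directory; adjust to yours)
scp karman.mph myuser@vsc4.vsc.ac.at:~/karman/
```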

## Job script

An example job script is shown below.

```
#!/bin/bash
# slurmsubmit.sh

#SBATCH --nodes=1
#SBATCH --job-name="karman"
#SBATCH --partition=mem_0384
#SBATCH --qos=mem_0384

module purge
module load Comsol/5.6   # or Comsol/5.5

MODELTOCOMPUTE="karman"
path=$(pwd)
INPUTFILE="${path}/${MODELTOCOMPUTE}.mph"
OUTPUTFILE="${path}/${MODELTOCOMPUTE}_result.mph"
BATCHLOG="${path}/${MODELTOCOMPUTE}.log"

echo "reading the inputfile"
echo $INPUTFILE
echo "writing the resultfile to"
echo $OUTPUTFILE
echo "COMSOL logs written to"
echo $BATCHLOG
echo "and the usual slurm...out"

# COMSOL's internal options for the number of nodes (-nn) and related settings (-np, -nnhost, ...) are deduced from SLURM
comsol batch -mpibootstrap slurm -inputfile ${INPUTFILE} -outputfile ${OUTPUTFILE} -batchlog ${BATCHLOG} -alivetime 15 -recover -mpidebug 10
```

### Possible I/O error

COMSOL generates a huge number of temporary files during the calculation. By default these files are saved in $HOME, and then this error can occur. To avoid it, change the path of $TMPDIR to e.g. /local, so that the temporary files are stored on the SSD storage local to the compute node. To get rid of this error, simply augment the comsol command in the job script with the following option:

```
-tmpdir "/local"
```
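With the temporary directory redirected, the comsol line in the job script above would look like this (a sketch using the same variables as in the example script):

```shell
# Same call as in the job script, with temporary files redirected to node-local storage
comsol batch -mpibootstrap slurm -inputfile ${INPUTFILE} -outputfile ${OUTPUTFILE} \
       -batchlog ${BATCHLOG} -alivetime 15 -recover -tmpdir "/local"
```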

## Submit job

```
sbatch karman.job
```
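After submission, the job can be monitored with the standard SLURM tools (the job ID placeholder must be replaced with the ID printed by sbatch):

```shell
squeue -u $USER           # list your pending and running jobs
scontrol show job <jobid> # detailed information on a single job
```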

### Using a shared node

If your case is not that demanding in terms of hardware resources, i.e. your job does not need the resources of a full VSC-4 node with 48 cores, make use of one of the shared nodes. These are non-exclusive nodes, so more than one job can run at the same time on the provided hardware.

On these nodes you have to tell SLURM how much memory (RAM) your case needs. This value should be less than the maximum memory of these nodes, which is 96 GB; otherwise your job needs a whole node anyway. Here we use --mem=20G to request 20 GB of memory.

```
#!/bin/bash
# slurmsubmit.sh

#SBATCH -n 1
#SBATCH --job-name="clustsw"
#SBATCH --qos=mem_0096
#SBATCH --mem=20G

hostname

module purge
```
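To pick a sensible --mem value for future runs, the peak memory of a finished job can be checked with SLURM's accounting tool; the job ID is a placeholder:

```shell
# MaxRSS shows the peak resident memory actually used by the job steps
sacct -j <jobid> --format=JobID,JobName,MaxRSS,Elapsed,State
```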