  * [[https://www.ansys.com/products/fluids/ansys-fluent|ANSYS CFD]]
  * [[https://www.openfoam.com|OpenFOAM]]
  * [[https://www.comsol.de/|COMSOL]]
  
====== ANSYS-Fluent (CFD) ======
  
====== COMSOL ======
The following example case, including the directory structure and the appropriate batch file, is provided here: {{ :doku:karman.rar |}}
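
To unpack the archive on the cluster you can use ''unrar'' (assuming the tool is available in your environment):

<code>
unrar x karman.rar
</code>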
  
===== Module =====
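
A minimal sketch of the module setup; the module name ''Comsol/5.6'' is taken from the shared-node example further down, other installed versions can be listed with ''module avail'':

<code>
module purge            # start with a clean environment
module load Comsol/5.6  # load the COMSOL module
module list             # verify what is loaded
</code>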

===== Job script =====

<code>
comsol batch -mpibootstrap slurm -inputfile ${INPUTFILE} -outputfile ${OUTPUTFILE} -batchlog ${BATCHLOG} -alivetime 15 -recover -mpidebug 10
</code>
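
The variables ''INPUTFILE'', ''OUTPUTFILE'' and ''BATCHLOG'' have to be defined earlier in the job script. A minimal sketch, with hypothetical file names based on the karman example (adapt them to your case):

<code>
INPUTFILE=karman.mph        # hypothetical input file
OUTPUTFILE=karman_out.mph   # hypothetical output file
BATCHLOG=karman_batch.log   # hypothetical log file
</code>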

==== Possible IO-Error ====

COMSOL generates a huge amount of temporary files during the calculation. By default these files are saved in ''$HOME'', and then this I/O error can occur. To avoid it, change the path of ''$TMPDIR'' to e.g. ''/local''; the temporary files will then be stored on the SSD storage local to the compute node.
To get rid of this error, just extend the comsol command in the job script with the following option:
<code>
-tmpdir "/local"
</code>
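
Combined with the batch command shown above, the complete call then reads:

<code>
comsol batch -mpibootstrap slurm -inputfile ${INPUTFILE} -outputfile ${OUTPUTFILE} -batchlog ${BATCHLOG} -alivetime 15 -recover -mpidebug 10 -tmpdir "/local"
</code>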

===== Submit job =====

<code>
sbatch karman.job
</code>
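
The state of the submitted job can be checked with the usual SLURM tools, for example:

<code>
squeue -u $USER    # show your pending and running jobs
</code>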

==== Using a shared node ====

If your case is not that demanding on hardware and you are interested in a fast solution, it is possible to use one of the shared nodes. These are non-exclusive nodes, so more than one job can run on the provided hardware.
On these nodes you have to tell SLURM how much memory (RAM) your case needs. This value should be less than the maximum of 96 GB these nodes offer; otherwise your job needs a whole node anyway.
Here we use ''--mem=20G'' to request 20 GB of memory.

<code>
#!/bin/bash
# slurmsubmit.sh

#SBATCH -n 1                   # one task in total
#SBATCH --ntasks-per-node=1    # one task per node
#SBATCH --job-name="clustsw"
#SBATCH --qos=mem_0096         # QOS of the shared 96 GB nodes
#SBATCH --mem=20G              # request 20 GB of memory on the shared node

hostname

module purge
module load Comsol/5.6
module list
.
.
.
</code>
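
The script is submitted in the same way as before, using the file name given in its comment line:

<code>
sbatch slurmsubmit.sh
</code>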

----

{{ :doku:karman3.gif?nolink |}}

----