  * [[https://
  * [[https://
  * [[https://
====== ANSYS-Fluent (CFD) ======
====== COMSOL ======

The following example case, including the directory structure\\
and the appropriate batch file, is provided here: {{ :
===== Module =====
==== Possible IO-Error ====
COMSOL generates a huge number of temporary files during a calculation. By default these files are saved in ''$HOME'', which is what triggers this error. To avoid it, change ''$TMPDIR'' to e.g. ''/local'', so that the temporary files are stored on the SSD storage local to the compute node.
To get rid of this error, just extend the comsol command in the job script with the following option:
<code>
-tmpdir "/
</code>
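In a job script, this option is simply appended to the COMSOL batch call. A minimal sketch, assuming a plain batch run; the input and output file names (''karman.mph'', ''karman_out.mph'') are placeholders, not taken from this page:

<code>
# Hypothetical COMSOL batch call; the file names are placeholders.
# -tmpdir redirects COMSOL's temporary files to the node-local SSD storage.
comsol batch -inputfile karman.mph -outputfile karman_out.mph -tmpdir "/local"
</code>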

===== Submit job =====

<code>
sbatch karman.job
</code>
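After submission, the job can be monitored with SLURM's standard tools, for example:

<code>
# List your own queued and running jobs
squeue -u $USER

# Show the details of a single job; <jobid> is a placeholder
scontrol show job <jobid>
</code>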

==== Using a shared node ====

If your case isn't that demanding on hardware and you are interested in a fast solution, you can use one of the shared nodes. These are non-exclusive nodes, so more than one job can use the provided hardware.
On these nodes you have to tell SLURM how much memory (RAM) your case needs. This value should be less than the maximum of 96 GB these nodes provide; otherwise your job needs a whole node anyway.
Here we use:

<code>
#!/bin/bash
# slurmsubmit.sh

#SBATCH -n 1
#SBATCH --ntasks-per-node=1
#SBATCH --job-name="
#SBATCH --qos=mem_0096
#SBATCH --mem=20G

hostname

module purge
module load Comsol/5.6
module list
.
.
.
</code>
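To pick a sensible ''--mem'' value, the peak memory of a comparable finished job can be looked up in SLURM's accounting database; a sketch, with the job id as a placeholder:

<code>
# MaxRSS is the peak resident memory used by each step of the finished job
sacct -j <jobid> --format=JobID,JobName,MaxRSS,Elapsed
</code>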

----

{{ :

----