====== COMSOL ======
More information about Comsol GUI applications and Comsol batch jobs can be found here: [[https://...]]
The following case is provided here, including the appropriate batch file: {{ : ... }}

It is intended solely to demonstrate how to use Comsol on a cluster.
===== Module =====
<code>
module avail Comsol
</code>
Currently on VSC-4 and VSC-5, these versions can be loaded:
  * Comsol/5.5
  * Comsol/5.6
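A specific version is then loaded with the module command. A short example (using Comsol/6.1, the version appearing in the job script below):

<code>
# load a specific Comsol version and verify the loaded modules
module load Comsol/6.1
module list
</code>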
===== Workflow =====
Typically, a simulation case is set up in the Comsol GUI and then computed on the cluster in batch mode using a job script, as shown below.
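A minimal sketch of such a workflow, assuming the model file is prepared locally and solved on the cluster in batch mode (user name, host name and file names are placeholders, not taken from this page):

<code>
# copy the model file to the cluster (placeholder user, host and paths)
scp karman.mph myuser@vsc5.vsc.ac.at:comsol-case/

# on the cluster: submit the batch job (see the job script below)
sbatch karman.job

# after the job has finished: copy the computed model back
scp myuser@vsc5.vsc.ac.at:comsol-case/karman_out.mph .
</code>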
----
===== Job script =====
An example of a job script is provided below.
<code>
#!/bin/bash
# slurmsubmit.sh
## Example for VSC-5

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --job-name="karman"
#SBATCH --partition=zen3_0512
#SBATCH --qos=zen3_0512_devel

# do not restrict Intel MPI pinning to the cpuset set by SLURM; pin the ranks to cores 0-3
export I_MPI_PIN_RESPECT_CPUSET=0
export I_MPI_PIN_PROCESSOR_LIST=0-3

module purge
module load intel-mpi/2021.5.0
module load Comsol/6.1

INPUTFILE="..."      # the model (.mph) file to compute
OUTPUTFILE="..."     # file the computed model is written to
BATCHLOG="..."       # log file of the batch run

# run the model in batch mode through SLURM using Intel MPI
comsol -mpi intel -np 4 -nn 4 batch slurm -inputfile ${INPUTFILE} -outputfile ${OUTPUTFILE} -batchlog ${BATCHLOG} -alivetime ...
</code>
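In the example above the process counts are hard-coded to 4 and have to match the SLURM request. A small sketch of how the same call could take these values from the SLURM environment instead (an assumption for illustration, not part of the provided batch file; with the request above both variables evaluate to 4):

<code>
# take the process counts from the SLURM allocation instead of hard-coding them
NN=${SLURM_NTASKS:-4}              # total number of tasks in the allocation
NP=${SLURM_NTASKS_PER_NODE:-4}     # tasks per node

comsol -mpi intel -np ${NP} -nn ${NN} batch slurm \
    -inputfile ${INPUTFILE} -outputfile ${OUTPUTFILE} -batchlog ${BATCHLOG}
</code>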
==== Possible IO-Error ====
<code>
sbatch karman.job
</code>
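After submitting, the state of the job and its output can be checked with standard SLURM commands (generic commands, not specific to this example):

<code>
squeue -u $USER         # list your pending and running jobs
cat slurm-<jobid>.out   # inspect the job output; replace <jobid> with the actual job id
</code>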
==== Using a shared node ====

If your case is not that demanding in terms of hardware resources, i.e. your job does not need a full VSC-4 node with 48 cores, make use of one of the shared nodes. These nodes are non-exclusive, so more than one job can run on the same hardware at the same time.

On these nodes you have to tell SLURM **how much memory (RAM)** your case needs. This value should be less than the maximum memory of these nodes, which is 96 GB; otherwise your job needs a whole node anyway. Here we use --mem=20G to dedicate 20 GB of memory to the job.

<code>
#!/bin/bash
# slurmsubmit.sh

#SBATCH -n 1
#SBATCH --ntasks-per-node=1
#SBATCH --job-name="karman"
#SBATCH --qos=skylake_0096
#SBATCH --mem=20G

hostname

module purge
module load Comsol/5.6
module list
.
.
.
</code>
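To pick a sensible value for --mem, the peak memory use of a comparable finished job can be checked with SLURM's accounting tool (a generic command; the job id is a placeholder):

<code>
# show the peak memory use (MaxRSS) of a finished job
sacct -j <jobid> --format=JobID,JobName,MaxRSS,Elapsed
</code>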
----
===== Result =====
{{ : ... }}