====== COMSOL ======
More information about Comsol GUI applications and Comsol batch jobs can be found here: [[https://colab.tuwien.ac.at/display/IAVSC/Comsol|TUCOLAB - Comsol]]
  
The following case is provided here, including the appropriate batch-file:
{{ :doku:karman.zip |}}

It is provided solely for the purpose of demonstrating how to use Comsol on a cluster.
  
===== Module =====
===== Workflow =====
  
Typically, you define your complete case either on the interactive access node (using noMachine) or on your local machine, and save it as a *.mph file. We recommend using noMachine, because the interactive access node and the cluster have the same software packages installed. The *.mph file contains all the information required to run the calculation on the cluster; the results of the computation are then saved to the specified output *.mph file.
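If the case was prepared on a local machine, the *.mph file has to be transferred to the cluster before submission; a minimal sketch (user name, host, and target directory are placeholders, adjust them to your account):

```shell
# Copy the locally prepared model file to the cluster
# (user name, host, and target directory are placeholders)
scp karman.mph myuser@vsc5.vsc.ac.at:comsol/karman/
```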
----
  
===== Job script =====
  
An example of a job script is provided below.
  
<code>
  
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --job-name="karman"
  
export I_MPI_PIN_RESPECT_CPUSET=0
export I_MPI_PIN_PROCESSOR_LIST=0-3
  
module purge
module load Comsol/6.1
  
INPUTFILE="karman.mph"
OUTPUTFILE="karmanout.mph"
BATCHLOG="LOGFILE.log"
  
  
comsol -mpi intel -np 4 -nn 4 batch slurm -inputfile ${INPUTFILE} -outputfile ${OUTPUTFILE} -batchlog ${BATCHLOG} -alivetime 600

</code>
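Instead of hardcoding the three file names, they can also be derived from a single model name, as an earlier revision of this job script did; a minimal sketch:

```shell
# Derive input, output, and log file names from one model name
# (pattern taken from an earlier revision of this job script)
MODELTOCOMPUTE="karman"
path=$(pwd)

INPUTFILE="${path}/${MODELTOCOMPUTE}.mph"
OUTPUTFILE="${path}/${MODELTOCOMPUTE}_result.mph"
BATCHLOG="${path}/${MODELTOCOMPUTE}.log"

echo "reading the inputfile:  $INPUTFILE"
echo "writing the resultfile: $OUTPUTFILE"
echo "COMSOL log written to:  $BATCHLOG"
```

This way, only MODELTOCOMPUTE has to be changed when the script is reused for another case.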
==== Possible IO-Error ====
  
<code>
sbatch karman.job
</code>
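After submission, the job state and the Comsol batch log can be checked with standard SLURM tools (log file name as defined in the job script above):

```shell
# Check the state of your jobs in the queue
squeue --user=$USER

# Follow the Comsol batch log while the job runs
tail -f LOGFILE.log
```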
  
  
----
===== Result =====
  
{{ :doku:karman3.gif?nolink |}}
  • doku/comsol.1688551584.txt.gz
  • Last modified: 2023/07/05 10:06
  • by amelic