====== COMSOL ======
  
The following case, including the directory structure and the appropriate batch file, is provided here solely for the purpose of demonstrating how to use COMSOL on a cluster: {{ :doku:karman.rar |}}
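If needed, the archive can be unpacked directly on the cluster; a minimal sketch, assuming the ''unrar'' tool is available on the login node:

<code>
# unpack the example case into the current directory
unrar x karman.rar
</code>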
  
===== Module =====
To see which COMSOL versions are available, search the module system:
  
<code>
module avail Comsol
</code>
Currently on VSC-4 and VSC-5, these versions can be loaded:
  * Comsol/5.5
  * Comsol/5.6
  * Comsol/6.1
  
<code>
module load Comsol/6.1
</code>

===== Job script =====

An example job script for running the case on VSC-5:

<code>
#!/bin/bash
  
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --job-name="karman"
#SBATCH --partition=zen3_0512
#SBATCH --qos=zen3_0512_devel

# let Intel MPI ignore the cpuset assigned by SLURM and pin the MPI ranks to cores 0-3
export I_MPI_PIN_RESPECT_CPUSET=0
export I_MPI_PIN_PROCESSOR_LIST=0-3
  
module purge
module load intel-mpi/2021.5.0
module load Comsol/6.1
  
MODELTOCOMPUTE="karman"
# input, output and log file are derived from the model name
# (.mph is COMSOL's model file format)
INPUTFILE="${MODELTOCOMPUTE}.mph"
OUTPUTFILE="${MODELTOCOMPUTE}_out.mph"
BATCHLOG="${MODELTOCOMPUTE}.log"
echo "and the usual slurm...out"
  
# example command for VSC-5: 4 MPI processes (-nn 4), matching --ntasks-per-node
comsol -mpi intel -nn 4 batch slurm -inputfile ${INPUTFILE} -outputfile ${OUTPUTFILE} -batchlog ${BATCHLOG} -alivetime 600
</code>
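To run the example, the script can be submitted from the directory containing the model file; a short sketch, assuming the job script was saved as ''karman.job'' (the file name is an assumption):

<code>
# submit the job script and check the queue
sbatch karman.job
squeue -u $USER
</code>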
  
==== Using a shared node ====
  
If your case isn't that demanding concerning hardware resources, i.e. your job does not need the resources of a full VSC-4 node with 48 cores, make use of one of the shared nodes. These are non-exclusive nodes, so more than one job can run on the provided hardware at the same time.

On these nodes you have to tell SLURM **how much memory (RAM)** your case needs. This value should be less than the maximum memory of these nodes, which is 96GB; otherwise your job needs a whole node anyway.
Here we use --mem=20G to dedicate 20GB of memory, as shown in the script below.
  
<code>
#!/bin/bash

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --job-name="clustsw"
#SBATCH --qos=skylake_0096
#SBATCH --mem=20G
</code>
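
To check afterwards whether the requested 20GB were actually sufficient, the memory high-water mark of a finished job can be queried from the SLURM accounting database (the job id is a placeholder):

<code>
# show the peak memory use (MaxRSS) of job 1234567
sacct -j 1234567 --format=JobID,JobName,MaxRSS,Elapsed
</code>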
  