  
<code>
sbatch karman.job
</code>

==== Using a shared node ====

If your case is not that demanding on hardware and you want a fast turnaround, you can use one of the shared nodes. These nodes are non-exclusive, so more than one job may run on the same hardware at the same time.
On these nodes you have to tell SLURM how much memory (RAM) your case needs. The value should stay below the 96 GB these nodes provide; otherwise your job needs a whole node anyway.
Here we use --mem=20G to request 20 GB of memory.

<code>
#!/bin/bash
# slurmsubmit.sh

#SBATCH -n 1                      # a single task
#SBATCH --ntasks-per-node=1       # one task per node
#SBATCH --job-name="clustsw"
#SBATCH --qos=mem_0096            # QOS of the shared (96 GB) nodes
#SBATCH --mem=20G                 # request 20 GB of memory

hostname

module purge
module load Comsol/5.6
module list
.
.
.
</code>
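
Assuming the script is saved as slurmsubmit.sh (the name used in the comment above), a minimal sketch of submitting it and checking the memory request could look like this; the job ID is a placeholder:

<code>
sbatch slurmsubmit.sh                      # submit the job to a shared node
squeue -u $USER                            # check the state of your jobs
scontrol show job <jobid> | grep -i mem    # verify that 20G were requested
</code>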
  
----
  
  
^                         ^ Parameter / Function Sweep         ^ Batch Sweep                       ^ Cluster Sweep ^
^ Loop                    | inner or outer                     | outer                             | outer         |
^ Settings in GUI         | Enable distributed parameter sweep | Set number of simultaneous jobs\\        ||
^ Entries on command line | --mpibootstrap slurm               | --mode desktop                           ||
^ Description             | MPI-synchronized                   | next job drops in free slots             ||
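
For orientation, the command-line entries from the table might be used roughly as in the following sketch. The input and output file names are placeholders and the exact option syntax depends on the installed Comsol version; only --mpibootstrap slurm and --mode desktop are taken from the table above.

<code>
# distributed parameter / function sweep (MPI-synchronized across the allocated tasks)
comsol batch --mpibootstrap slurm -inputfile model.mph -outputfile model_out.mph

# batch or cluster sweep (each finished job frees a slot for the next one)
comsol batch --mode desktop -inputfile model.mph -outputfile model_out.mph
</code>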
  