====== ANSYS-Fluent (CFD) ======

  * [[https://
  * [[https://
  * [[https://
  * [[https://

====== Graphical User Interface ======

Read here [[https://colab.tuwien.ac.at/display/IAVSC/CAE+Software|how to start the GUI of your CAE software]].

===== Module =====

Check available versions of Ansys:

<code>
module avail 2>&1 | grep -i Ansys
</code>
Load the correct version of Ansys, e.g.,

<code>
module load *your preferred module*
</code>

Available modules on VSC-4:
  * ANSYS/
  * ANSYS/
  * ANSYS/

----

===== General Workflow =====

The following figure shows an overview of the general workflow if you use Fluent on your local machine for pre- and postprocessing and the cluster for solving your case.\\ For this workflow a graphical connection isn't necessary.

All files needed for this testcase are provided here:

{{ :
{{ :

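For the copy steps between the local machine and the cluster implied by this workflow, a typical transfer could look as follows. The host name and paths here are invented placeholders, not VSC-specific values:

<code>
# copy the prepared case and journal file to the cluster (placeholder host/paths)
scp mycase.cas.gz fluent.jou user@cluster.example.com:/path/to/workdir/

# after the job has finished, fetch the result data back
scp user@cluster.example.com:/path/to/workdir/mycase.dat.gz .
</code>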
===== Input file =====

Create a journal file (fluent.jou), which is written in a dialect of Lisp called Scheme and contains all the instructions that are to be executed during the run. A basic form of this file is as follows:

<code>
# -----------------------------------------------------------
# SAMPLE JOURNAL FILE
#
# read case file (*.cas.gz) that had previously been prepared
file/
file/
solve/
solve/
file/
exit yes
</code>

The ''

{{ :

Keep in mind to set the appropriate path on the cluster. Here the files will be saved in the same directory where the journal file is located. For the sake of clarity it can be better to create an additional directory for these backup files.
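Because several journal commands above are cut off in this page's revision history, here is a hypothetical complete journal for a simple steady-state run. The file names and the iteration count are placeholders, not values from the original page:

<code>
# read case file (*.cas.gz) that had previously been prepared
file/read-case "mycase.cas.gz"
# initialize the solution and iterate (placeholder count)
solve/initialize/initialize-flow
solve/iterate 500
# write the results and leave Fluent
file/write-data "mycase.dat.gz"
exit yes
</code>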

----

===== Job script =====

A script for running Ansys/Fluent, e.g.:

<code>
#!/bin/sh
#SBATCH -J fluent
#SBATCH -N 2
#SBATCH -o job.%j.out
#SBATCH --ntasks-per-node=24
#SBATCH --threads-per-core=1
#SBATCH --time=04:00:00

module purge
module load *your preferred module*

JOURNALFILE=fluent.jou

if [ $SLURM_NNODES -eq 1 ]; then
    # Single node with shared memory
    fluent 3ddp -g -t $SLURM_NTASKS -i $JOURNALFILE > fluent.log
else
    # Multi-node: call Fluent with the 3D double-precision solver,
    # without GUI (-g), via SLURM with NTASKS processes, using the
    # Infiniband interconnect and OpenMPI
    fluent 3ddp \
        -g \
        -slurm -t $SLURM_NTASKS \
        -pinfiniband \
        -mpi=openmpi \
        -i $JOURNALFILE > fluent.log
fi
</code>

This job script allows a flexible definition of the desired configuration: you can easily change the number of compute nodes, and the script generates the appropriate command to start the calculation with Fluent.

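The single-node/multi-node branch can be exercised without a cluster by first assembling the command line as a string; the variable values below are stand-ins for what SLURM would set at run time:

```shell
#!/bin/sh
# Stand-ins for the values SLURM would set at run time
SLURM_NNODES=2
SLURM_NTASKS=96
JOURNALFILE=fluent.jou

# Build the Fluent command line exactly as the job script would
if [ "$SLURM_NNODES" -eq 1 ]; then
    # single node: shared-memory parallelism
    CMD="fluent 3ddp -g -t $SLURM_NTASKS -i $JOURNALFILE"
else
    # multiple nodes: distributed run via SLURM, Infiniband and OpenMPI
    CMD="fluent 3ddp -g -slurm -t $SLURM_NTASKS -pinfiniband -mpi=openmpi -i $JOURNALFILE"
fi

echo "$CMD"
```

With two nodes this prints the multi-node variant of the command; with ''SLURM_NNODES=1'' it falls back to the shared-memory form.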
----

===== License server settings =====

These variables are defined when loading the Fluent module file:

<code>
setenv
setenv
</code>

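The two truncated ''setenv'' lines most likely point at the Ansys license server. In general such settings have the following shape; the server name and ports here are invented placeholders, not the actual VSC values:

<code>
setenv ANSYSLMD_LICENSE_FILE 1055@license.example.com
setenv ANSYSLI_SERVERS 2325@license.example.com
</code>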
----

===== Submit job =====

<code>
sbatch fluent_run.sh
</code>

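Once submitted, the job can be monitored with the usual SLURM commands:

<code>
squeue -u $USER             # list your queued and running jobs
scontrol show job <jobid>   # detailed information on one job
</code>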
----

===== Restarting a calculation =====

To restart a fluent job, you can read in the latest data file:

<code>
# read case file (*.cas.gz) that had previously been prepared
file/
file/read-data "
solve/init/initialize-flow
solve/
file/
exit yes
</code>

----

====== ABAQUS ======

===== ABAQUS 2016 =====

==== Sample job script ====

''

<code>
#
#
#SBATCH -J abaqus
#SBATCH -N 2
#SBATCH -o job.%j.out
#SBATCH -p E5-2690v4
#SBATCH -q E5-2690v4-batch
#SBATCH --ntasks-per-node=8
#SBATCH --mem=16G

module purge
module load Abaqus/

export LM_LICENSE_FILE=<

# specify some variables:
JOBNAME=My_job_name
INPUT=My_Abaqus_input.inp
SCRATCHDIR="/

# MODE can be '
#
MODE="

scontrol show hostname $SLURM_NODELIST > hostlist
cpu=`expr $SLURM_NTASKS / $SLURM_JOB_NUM_NODES`
echo $cpu

mp_host_list="
for i in $(cat hostlist)
do
    mp_host_list="
done

mp_host_list=`echo ${mp_host_list} | sed -e "

echo "

abaqus interactive job=$JOBNAME cpus=$SLURM_NTASKS mp_mode=$MODE scratch=$SCRATCHDIR input=$INPUT
</code>

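Several lines of the loop above are cut off in this page's revision history. Judging from the surrounding lines, its intent is to turn the node list into an Abaqus-style ''mp_host_list'' of the form ''[['node001',8],['node002',8]]''. A minimal sketch of that idea, assuming that target format (the node names and CPU count are made up):

```shell
#!/bin/sh
# Pretend SLURM handed us two nodes with 8 tasks each
printf 'node001\nnode002\n' > hostlist
cpu=8

# Assemble [['node001',8],['node002',8]] piece by piece
mp_host_list="["
for i in $(cat hostlist)
do
    mp_host_list="${mp_host_list}['$i',$cpu],"
done

# Replace the trailing comma with the closing bracket
mp_host_list=$(echo "$mp_host_list" | sed -e "s/,$/]/")

echo "$mp_host_list"
```

The resulting string could then be written to an Abaqus environment file, as the truncated ''echo'' line above presumably does.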
----

===== ABAQUS 2016 =====

==== Checkpointing and restart ====

Users sometimes find that their jobs take longer than the maximum runtime permitted by the scheduler.

This will create a restart file (.res file extension) from which a job that is killed can be restarted.

- Activate the restart feature by adding

<code>
*restart, write
</code>

at the top of your input file and run your job as normal. It should produce a restart file with a .res file extension.

<
<

<code>
abaqus job=jobName oldjob=oldjobName ...
</code>

where ''oldjobName'' is the initial job and ''jobName'' is a new job whose input file contains only the line:

<code>
*restart, read
</code>

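Putting the two steps together: if the original run was started as ''job=myjob'' from an input file containing ''*restart, write'', a restart could be launched like this (the job and file names are illustrative, not from the original page):

<code>
# restart.inp contains only the line: *restart, read
abaqus interactive job=restartjob oldjob=myjob input=restart.inp cpus=16
</code>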
----

===== ABAQUS 2016 =====

==== Checkpointing and restart ====

Example:

INPUT: [[examples/

JOB SCRIPT: [[examples/

INPUT FOR RESTART: [[examples/

----

====== COMSOL ======

===== Module =====

Available versions of Comsol can be found by executing the following line:

<code>
module avail 2>&1 | grep -i comsol
</code>

Currently these versions can be loaded:
  * Comsol/
  * Comsol/

<code>
module load *your preferred module*
</code>

----

===== Workflow =====

In general you define your complete case on your local machine and save it as a *.mph file.\\ This file contains all necessary information to run a successful calculation.

----

===== Job script =====

An example of a job script is shown below.

<code>
#
# slurmsubmit.sh

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --job-name="
#SBATCH --partition=mem_0384
#SBATCH --qos=mem_0384

module purge
module load Comsol/

MODELTOCOMPUTE="
path=$(pwd)

INPUTFILE="
OUTPUTFILE="
BATCHLOG="

echo "
echo $INPUTFILE
echo "
echo $OUTPUTFILE
echo "
echo $BATCHLOG
echo "and the usual slurm...out"

# COMSOL'
comsol batch -mpibootstrap slurm -inputfile ${INPUTFILE} -outputfile ${OUTPUTFILE} -batchlog ${BATCHLOG} -alivetime 15 -recover -mpidebug 10
</code>