====== ANSYS-Fluent (CFD) ======

===== Module =====

Check available versions of Ansys:

<code>
module avail 2>&1 | grep -i Ansys
</code>
Load the correct version of Ansys, e.g.,

<code>
module load *your preferred module*
</code>

Available modules on VSC-4:
  * ANSYS/2019R3
  * ANSYS/2020R2
  * ANSYS/2021R1

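For example, to load the 2021 release listed above (a minimal sketch; pick whichever version matches your case files):

<code>
module purge
module load ANSYS/2021R1
module list    # verify that the module is loaded
</code>
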
----

===== General Workflow =====
==== Pre- and postprocessing locally & remote computation on VSC ====

The figure below gives an overview of the general workflow if you use Fluent on your local machine for pre- and postprocessing and the cluster for solving your case.\\ A graphical connection is not required for this workflow.

All files needed for this test case are provided here:
{{ :doku:fluent_testcase.zip |}}

{{ :doku:fluent_workflow.png?nolink |}}


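To transfer the prepared case and journal files from your local machine to the cluster (and to fetch results back for postprocessing), ''scp'' is sufficient. A minimal sketch, assuming the VSC-4 login node ''vsc4.vsc.ac.at'' and placeholder user name and target directory:

<code>
# copy case and journal file to the cluster
scp tubench1p4b.cas.gz fluent.jou myuser@vsc4.vsc.ac.at:fluent_testcase/

# after the job has finished, fetch the results for local postprocessing
scp myuser@vsc4.vsc.ac.at:fluent_testcase/tubench1p4b.dat.gz .
</code>
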
===== Input file =====

Create a journal file (fluent.jou), which is written in a dialect of Lisp called Scheme.
This file may be very short, i.e., only instructing the solver to **read, run and write** the case.
However, it may also contain every single instruction that is executed during the run.
For every command you have the choice between writing it into the journal file or issuing it in the graphical user interface (GUI).

A basic form of the journal file reads:

<code>
# -----------------------------------------------------------
# SAMPLE JOURNAL FILE
#
# read case file (*.cas.gz) that had previously been prepared
file/read-case "tubench1p4b.cas.gz"
file/autosave/data-frequency 10
solve/init/initialize-flow
solve/iterate 500
file/write-data "tubench1p4b.dat.gz"
exit yes
</code>
The ''%%autosave/data-frequency%%'' setting saves a *.dat file every 10 iterations; the flow field is then initialised and the iteration is started.

Preferably make these settings in the GUI, as shown below for the autosave frequency.

{{ :doku:autosave_gui_bearb.png?nolink |}}

Keep in mind to set the appropriate path for the cluster. Here the files will be saved in the same directory as the journal file. For the sake of clarity it can be better to create a separate directory for these backup files, e.g. <code>./Autosave/*your_filename*.gz</code>
This path is relative to the directory where your *.cas.gz file is located. Also make sure that the folder called "Autosave" exists before running your job.
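You can create this folder once in the run directory before submitting the job:

<code>
# create the autosave target directory next to the case and journal file
mkdir -p ./Autosave
</code>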

----


===== Job script =====

==== Single node script ====
A script for running Ansys/Fluent called fluent_run.sh is shown below. The script distinguishes between a single-node (shared memory) and a multi-node run; adjust ''-N'' accordingly.

<code>
#!/bin/sh
#SBATCH -J fluent                 # job name
#SBATCH -N 2                      # number of nodes
#SBATCH -o job.%j.out             # output file (%j is replaced by the job ID)
#SBATCH --ntasks-per-node=24      # number of MPI tasks per node
#SBATCH --threads-per-core=1      # disable hyperthreading
#SBATCH --time=04:00:00           # maximum run time

module purge
module load *your preferred module*

JOURNALFILE=fluent.jou

if [ $SLURM_NNODES -eq 1 ]; then
    # Single node with shared memory
    fluent 3ddp -g -t $SLURM_NTASKS -i $JOURNALFILE > fluent.log
else
    # Multi-node:
    #   3ddp           3D double precision solver
    #   -g             run without GUI
    #   -slurm -t      run via SLURM with NTASKS processes
    #   -pinfiniband   use InfiniBand interconnect
    #   -mpi=openmpi   use OpenMPI
    #   -i             journal (input) file
    fluent 3ddp -g -slurm -t $SLURM_NTASKS -pinfiniband -mpi=openmpi \
        -i $JOURNALFILE > fluent.log
fi
</code>

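Here ''-t $SLURM_NTASKS'' resolves to nodes × ntasks-per-node (2 × 24 = 48 Fluent processes with the settings above). If you want the job log to record what was actually granted, a minimal optional sketch that could be placed near the top of the job script:

<code>
# optional: log the granted resources at job start
echo "nodes:       $SLURM_NNODES"
echo "tasks/node:  $SLURM_NTASKS_PER_NODE"
echo "total tasks: $SLURM_NTASKS"
</code>
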
==== Multiple node script ====
A script for running Ansys/Fluent on multiple nodes on VSC-4:

<code>
#!/bin/sh
#SBATCH -J fluent_1          # job name
#SBATCH -N 2                 # number of nodes
#SBATCH --partition=mem_0096 # mem_0096, mem_0384, mem_0768
#SBATCH --qos=mem_0096       # devel_0096, mem_0096, mem_0384, mem_0768
#SBATCH --ntasks-per-node=48 # number of cores per node
#SBATCH --threads-per-core=1 # disable hyperthreading

module purge
module load ANSYS/2021R1     # or whichever ANSYS version you prefer

# settings for running Fluent under SLURM
unset SLURM_JOBID
export FLUENT_AFFINITY=0
export SLURM_ENABLED=1
export SCHEDULER_TIGHT_COUPLING=1

# write the list of allocated nodes into a host file for Fluent
FL_SCHEDULER_HOST_FILE=slurm.${SLURM_JOB_ID}.hosts
/bin/rm -rf ${FL_SCHEDULER_HOST_FILE}
scontrol show hostnames "$SLURM_JOB_NODELIST" >> $FL_SCHEDULER_HOST_FILE

JOURNALFILE=fluent_transient # journal file name without the .jou extension

if [ $SLURM_NNODES -eq 1 ]; then
    # Single node with shared memory
    fluent 3ddp -g -t $SLURM_NTASKS -i $JOURNALFILE.jou > $JOURNALFILE.log
else
    # Multi-node: Intel MPI over InfiniBand, node list taken from the host file
    fluent 3ddp -platform=intel -g -t $SLURM_NTASKS -pib.ofed -mpi=intel \
        -cnf=${FL_SCHEDULER_HOST_FILE} -i $JOURNALFILE.jou > $JOURNALFILE.log
fi
</code>

This job script adapts to the requested configuration: you can easily change the number of compute nodes, and the script generates the appropriate command to start the Fluent calculation.

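Since ''sbatch'' command-line options take precedence over the ''#SBATCH'' directives in the script, you can, for example, switch to four nodes without editing the file (a minimal sketch, assuming the script is saved as fluent_run.sh as above):

<code>
sbatch -N 4 fluent_run.sh    # request 4 nodes; the script picks up SLURM_NNODES and SLURM_NTASKS
</code>
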
----

===== License server settings =====

These variables are defined when loading the fluent module file:

<code>
setenv       ANSYSLI_SERVERS 2325@LICENSE.SERVER
setenv       ANSYSLMD_LICENSE_FILE 1055@LICENSE.SERVER
</code>

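To verify that these variables are set in your environment after loading the module (LICENSE.SERVER above is a placeholder for the actual license host), you can print them:

<code>
module load ANSYS/2021R1
echo $ANSYSLI_SERVERS
echo $ANSYSLMD_LICENSE_FILE
</code>
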
----

===== Submit job =====

<code>
sbatch fluent_run.sh
</code>

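After submission, the job state and the solver output can be monitored with standard tools (log file name as defined in the job script):

<code>
squeue -u $USER      # show the state of your jobs
tail -f fluent.log   # follow the Fluent console output
</code>
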
----

===== Restarting a calculation =====

To restart a Fluent job, read in the case file and the latest data file; do not initialise the flow field again, otherwise the restart data would be discarded:

<code>
# read case file (*.cas.gz) that had previously been prepared
file/read-case "MyCaseFile.cas.gz"
# read the latest data file and continue the calculation from there
file/read-data "MyCase_-1-00050.dat.gz"
solve/iterate 500
file/write-data "MyCase.dat.gz"
exit yes
</code>

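If autosave is enabled as described above, the most recent data file can be identified directly on the cluster before adapting the restart journal (a minimal sketch, assuming the autosave files are written to ./Autosave):

<code>
ls -t ./Autosave/*.dat.gz | head -n 1    # newest autosave data file
</code>
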
----