  
===== General Workflow =====
==== Pre- and postprocessing locally & remote computation on VSC ====
  
The subsequent figure shows an overview of the general workflow when you use Fluent on your local machine for pre- and postprocessing and the cluster for solving your case.\\ For this workflow a graphical connection is not necessary.
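For example, the case and journal files can be copied to the cluster and the job can be submitted from the login node. A minimal sketch (user name, login address, directory and file names are placeholders):

<code>
# copy case and journal file to the cluster
scp yourcase.cas.gz fluent.jou user@vsc4.vsc.ac.at:~/case_dir/

# log in and submit the job script
ssh user@vsc4.vsc.ac.at
sbatch fluent_run.sh

# after the run, copy the results back to the local machine
scp user@vsc4.vsc.ac.at:~/case_dir/*.dat.gz .
</code>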
===== Input file =====
  
Create a journal file (fluent.jou), which is written in Scheme, a dialect of Lisp, and contains the instructions to be executed during the run.
This file may be very short, i.e., instructing the solver merely to **read, run and write** the case.
However, it may also contain every single instruction executed during the run.
You always have the choice between writing a certain command in the journal file or setting it in the graphical user interface (GUI).

A basic form of the journal file (file names and iteration count below are placeholders) reads:
  
<code>
; read the case file (file name is a placeholder)
file/read-case "yourcase.cas.gz"
; save a *.dat file every 10 iterations
file/autosave/data-frequency 10
; initialise the flow field to zero and iterate
solve/initialize/initialize-flow
solve/iterate 1000
; write the final data file and quit
file/write-data "yourcase.dat.gz"
exit yes
</code>
The ''%%autosave/data-frequency%%'' setting will save a *.dat file every 10 iterations; the flow field is initialised to zero and then the iteration is started.

Preferably make these settings in the GUI, as shown here for the autosave frequency in the subsequent graphic.
  
{{ :doku:autosave_gui_bearb.png?nolink |}}
  
Keep in mind to set the appropriate path for the cluster. Here the files will be saved in the same directory where the journal file is located. For the sake of clarity it can be better to create an additional directory for these backup files, e.g. <code>./Autosave/*your_filename*.gz</code>
This is a relative reference to the path where your *.cas.gz file is located. Also make sure that the folder called "Autosave" exists before running your job.
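To do so, create the directory next to your case file before submitting the job:

<code>
mkdir -p ./Autosave
</code>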
  
----
===== Job script =====
  
==== Single node script ====
A script for running Ansys/Fluent called fluent_run.sh is shown below (a minimal single-node sketch; the solver call matches the single node branch of the multiple node script below):

<code>
#!/bin/sh
#SBATCH -J fluent_1          # Job name
#SBATCH -N 1                 # Number of nodes
#SBATCH --partition=mem_0096 # mem_0096, mem_0384, mem_0768
#SBATCH --qos=mem_0096       # devel_0096, mem_0096, mem_0384, mem_0768
#SBATCH --ntasks-per-node=48 # number of cores
#SBATCH --threads-per-core=1 # disable hyperthreading

module purge
module load ANSYS/2021R1     # or whichever ANSYS version you prefer

JOURNALFILE=fluent           # .jou file

if [ $SLURM_NNODES -eq 1 ]; then
    # Single node with shared memory
    fluent 3ddp -g -t $SLURM_NTASKS -i $JOURNALFILE.jou > $JOURNALFILE.log
fi
</code>
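The job is then submitted from the login node, e.g.:

<code>
sbatch fluent_run.sh   # submit the job script
squeue -u $USER        # check the status of your jobs
</code>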

==== Multiple node script ====
A script for running Ansys/Fluent on multiple nodes on VSC-4:

<code>
#!/bin/sh
#SBATCH -J fluent_1          # Job name
#SBATCH -N 2                 # Number of nodes
#SBATCH --partition=mem_0096 # mem_0096, mem_0384, mem_0768
#SBATCH --qos=mem_0096       # devel_0096, mem_0096, mem_0384, mem_0768
#SBATCH --ntasks-per-node=48 # number of cores
#SBATCH --threads-per-core=1 # disable hyperthreading

module purge
module load ANSYS/2021R1     # or whichever ANSYS version you prefer

unset SLURM_JOBID
export FLUENT_AFFINITY=0
export SLURM_ENABLED=1
export SCHEDULER_TIGHT_COUPLING=1
FL_SCHEDULER_HOST_FILE=slurm.${SLURM_JOB_ID}.hosts
/bin/rm -rf ${FL_SCHEDULER_HOST_FILE}
scontrol show hostnames "$SLURM_JOB_NODELIST" >> $FL_SCHEDULER_HOST_FILE

JOURNALFILE=fluent_transient # .jou file

if [ $SLURM_NNODES -eq 1 ]; then
    # Single node with shared memory
    fluent 3ddp -g -t $SLURM_NTASKS -i $JOURNALFILE.jou > $JOURNALFILE.log
else
    # Multi-node
    fluent 3ddp -platform=intel -g -t $SLURM_NTASKS -pib.ofed -mpi=intel -cnf=${FL_SCHEDULER_HOST_FILE} -i $JOURNALFILE.jou > $JOURNALFILE.log
fi
</code>
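The ''scontrol show hostnames'' line writes the allocated node names, one per line, into the hosts file that is passed to Fluent via ''-cnf''. While the job is running, the solver output can be followed in the log file defined in the script:

<code>
tail -f fluent_transient.log   # log file name set via JOURNALFILE
</code>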
  