====== Ansys Workbench and Ansys Fluent (CFD) ======
  
===== General Workflow =====
To set up your workflow for Ansys most effectively, we recommend using the interactive access via noMachine and building the workflow there. Additionally, instructions for the RSM scheduler are available: [[https://colab.tuwien.ac.at/display/IAVSC/ANSYS|TU coLAB - Ansys]]
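As a minimal sketch of this interactive workflow (assuming the ANSYS/2021R1 module used in the examples below and the standard Linux launcher commands shipped with Ansys), Workbench or Fluent can be started from a terminal inside the noMachine session:

<code>
module load ANSYS/2021R1   # pick one of the versions listed in the Module section
runwb2 &                   # start Ansys Workbench, or
fluent &                   # start Ansys Fluent directly
</code>
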
===== Module =====
  
Check the available versions of Ansys by executing the following command in your terminal:
  
<code>
module avail 2>&1 | grep -i Ansys
</code>
and load your preferred Ansys module, e.g.,
  
<code>
module load ANSYS/2021R1
</code>
  
==== Pre- and postprocessing locally & remote computation on VSC ====
For Ansys Workbench and Ansys Fluent, the RSM scheduler is an effective option. Comprehensive guidance on its usage is available here: [[https://colab.tuwien.ac.at/display/IAVSC/ANSYS+RSM|TU coLAB - Ansys RSM]]
  
=== Ansys Fluent ===

The following gives an overview of a typical workflow when using Fluent for pre- and post-processing on your local machine or through noMachine (we highly recommend noMachine for interactive access), along with instructions on how to submit the Fluent job to the cluster.

For learning and testing how to submit a job to the cluster on one node with multiple cores (24), we provide a test case for Fluent. Proceed as follows:

  - Start Ansys Fluent and open the .cas file.
  - Write a .cas and .dat file out.
  - Submit your Slurm job script with the following command in the terminal: <code>
sbatch fluent_run.sh
</code>
  - Check the allocated number of processes by logging into the compute nodes and running **htop** or **top** (see the sketch after this list).
  - Check the status of your job in the fluent.out file (not the slurm.out file).
  - When the job is finished, open/write the .dat file and check the results.
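
A minimal sketch of these monitoring steps, assuming the job name ''fluent'' and the output file fluent.out from the job script below (the node name is only a placeholder):

<code>
squeue -u $USER        # find your job and the allocated compute node(s)
ssh <nodename>         # log into one of the allocated nodes
htop                   # or: top - inspect the running Fluent processes
exit                   # leave the compute node again
tail -f fluent.out     # follow the solver output in the submit directory
</code>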
  
All files needed for this testcase are provided here:
===== Input file =====
  
You can now create your own input file (a **Fluent journal file**, fluent.jou). If you do so, please write it with a Linux editor; tools that convert text files from Windows editors to Linux do not always work properly.
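
A quick way to check and fix the line endings of a journal file, assuming the common ''file'' and ''dos2unix'' tools are available on the system:

<code>
file fluent.jou        # reports "with CRLF line terminators" for Windows files
dos2unix fluent.jou    # convert to Unix line endings in place
</code>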

This file instructs Ansys Fluent which files to read, what to run, and which files to write.

Alternatively, you can enter commands in the Ansys Fluent console (TUI) or use the Ansys GUI. We recommend this only for smaller cases, primarily for training and for testing individual commands that are later collected in the journal file for larger runs.

A basic template for the journal file is provided below:
  
<code>
# -----------------------------------------------------------
# SAMPLE JOURNAL FILE
# read case file (*.cas.gz) that had previously been prepared
file/read-case "tubench1p4b.cas.gz"
solve/init/initialize-flow
solve/iterate 500
exit yes
</code>

Preferably, set these options in the GUI, as shown here for the autosave frequency in the subsequent graphic. Please be mindful that different Ansys versions have different settings!

{{:doku:autosave_gui_bearb.png|}}

Keep in mind to set the appropriate paths for your directories on the cluster. Here the files will be saved in the same directory as the journal file.

The ''%%autosave/data-frequency%%'' setting saves a *.dat file every 10 iterations; in the journal file above, the flow field is initialised and 500 iterations are started.
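
If you prefer to keep this setting in the journal file instead of the GUI, the corresponding TUI command looks like this (the exact menu path may differ between Ansys versions):

<code>
# save a *.dat file every 10 iterations
file/autosave/data-frequency 10
</code>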
----
  
  
===== Job script =====
When executing Slurm jobs for Fluent, the appropriate solver option has to be selected in the fluent command:
  
  * 2d: a 2D Fluent simulation (single precision).
  * 2ddp: a 2D Fluent simulation in double precision.
  * 3d: a 3D Fluent simulation (single precision).
  * 3ddp: a 3D Fluent simulation in double precision.

These options define the dimensionality and precision of the Fluent solver started through Slurm.
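
For example, only this first argument of the fluent call changes; the remaining options follow the single node script in the next subsection:

<code>
fluent 2d   -g -t 24 < fluent.jou > fluent.out   # 2D, single precision
fluent 3ddp -g -t 24 < fluent.jou > fluent.out   # 3D, double precision
</code>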
 + 
 + 
==== Single node script ====
 + 
A script for running Ansys/Fluent, called fluent_run.sh, for a 3D case in double precision is shown below.
  
<code>
#!/bin/sh
#SBATCH -J fluent
#SBATCH -N 1
#SBATCH -o job.%j.out
#SBATCH --ntasks-per-node=24

module purge
module load ANSYS/2021R1   # or whichever ANSYS version you prefer

JOURNALFILE=fluent.jou

time fluent 3ddp -g -t 24 < "./$JOURNALFILE" > fluent.out
</code>
 +
==== Multiple node script ====
A script for running Ansys/Fluent on multiple nodes on VSC-4:

<code>
#!/bin/sh
#SBATCH -J fluent_1          # Job name
#SBATCH -N 2                 # Number of nodes
#SBATCH --partition=mem_0096 # mem_0096, mem_0384, mem_0768
#SBATCH --qos=mem_0096       # devel_0096, mem_0096, mem_0384, mem_0768
#SBATCH --ntasks-per-node=48 # number of cores per node
#SBATCH --threads-per-core=1 # disable hyperthreading

module purge
module load ANSYS/2021R1 # or whichever ANSYS version you prefer

unset SLURM_JOBID
export FLUENT_AFFINITY=0
export SLURM_ENABLED=1
export SCHEDULER_TIGHT_COUPLING=1
FL_SCHEDULER_HOST_FILE=slurm.${SLURM_JOB_ID}.hosts
/bin/rm -rf ${FL_SCHEDULER_HOST_FILE}
scontrol show hostnames "$SLURM_JOB_NODELIST" >> $FL_SCHEDULER_HOST_FILE

JOURNALFILE=fluent_transient # journal file name without the .jou extension
  
if [ $SLURM_NNODES -eq 1 ]; then
    # Single node with shared memory
    fluent 3ddp -g -t $SLURM_NTASKS -i $JOURNALFILE.jou > $JOURNALFILE.log
else
    # Multi-node run over InfiniBand with Intel MPI
    fluent 3ddp -platform=intel -g -t $SLURM_NTASKS -pib.ofed -mpi=intel -cnf=${FL_SCHEDULER_HOST_FILE} -i $JOURNALFILE.jou > $JOURNALFILE.log
fi
</code>
  