====== Ansys ======

===== General Workflow =====
To set up your Ansys workflow on the cluster most effectively, follow the steps below: load the appropriate module, prepare your input files, and submit the computation through Slurm.
===== Module =====
Discover the available versions of Ansys by executing the following command in your terminal:
<code>
module avail 2>&1 | grep -i Ansys
</code>
and load your preferred version, e.g.:
<code>
module load ANSYS/<version>
</code>
==== Pre- and postprocessing locally & remote computation on VSC ====

For utilizing Ansys Workbench and Ansys Fluent, the RSM scheduler is an effective option. Comprehensive guidance on its usage can be found here:
=== Ansys Fluent ===

The following provides an overview of the general workflow: pre- and postprocessing are done on your local machine, while the computation runs on the VSC.

For learning how to set up and solve a case, consult the official Ansys Fluent tutorials.

  - Start Ansys Fluent and open the .cas file.
  - Write a .cas and .dat file out.
  - Submit the job script to the queue with ''sbatch fluent_run.sh''.
  - Check the allocated number of processes by logging into the computing nodes and inspecting the running processes (e.g. with ''top'').
  - Check the status of your job in the fluent.out file (not the slurm.out file).
  - When the job is finished, open/write the .dat file and check the results.
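The submit-and-monitor steps above can be run from a login node; a minimal sketch, assuming the file names used on this page (''fluent_run.sh'', ''fluent.out''):

```
# Submit the Fluent job script to Slurm
sbatch fluent_run.sh
# Check your job's state in the queue
squeue -u $USER
# Follow the solver output (not the slurm.out file)
tail -f fluent.out
```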
All files needed for this testcase are provided here:
===== Input file =====
You can now create your own input file (**Fluent journal file**, ''*.jou'') for your simulation.

This file instructs Ansys Fluent to read, run, and write files.

Alternatively, a journal file can be recorded from within the Fluent GUI (File / Write / Start Journal), which is also a convenient way to find the text commands that correspond to GUI actions.

A basic template for the journal file:
<code>
# -----------------------------------------------------------
# SAMPLE JOURNAL FILE
# -----------------------------------------------------------
# read case file (*.cas.gz) that had previously been prepared
# ("mycase" is a placeholder; use your own file names)
file/read-case "mycase.cas.gz"

# initialize the flow field and run 50 iterations
solve/initialize/initialize-flow
solve/iterate 50

# write the data file and exit
file/write-data "mycase.dat.gz"
exit yes
</code>
Preferably, you can change these settings in the GUI, as shown for the autosave frequency in the following screenshot. Please be mindful that different Ansys versions have different settings!

//Screenshot: autosave settings in the Fluent GUI//

Keep in mind to set the appropriate path for your directories on the cluster; here the files will be saved in the same directory as the journal file.
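The autosave frequency can also be set in the journal file itself; a sketch using the Fluent TUI path for this setting (the value 10 is an arbitrary example):

```
# save a data file every 10 iterations (TUI equivalent of the GUI autosave setting)
file/auto-save/data-frequency 10
```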
----
===== Job script =====
==== Single node script ====

When submitting Slurm jobs for Fluent, the dimensionality and precision of the solver are selected by one of the following command-line arguments:

  * ''2d'': 2D simulations, single precision
  * ''2ddp'': 2D simulations, double precision
  * ''3d'': 3D simulations, single precision
  * ''3ddp'': 3D simulations, double precision
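The flag naming is systematic (dimension plus an optional ''dp'' suffix), so a job script can assemble it from two variables; a minimal sketch, where ''DIM'' and ''PRECISION'' are illustrative helper variables, not Fluent options:

```shell
#!/bin/sh
# Assemble the Fluent solver flag from dimensionality and precision.
# DIM and PRECISION are hypothetical helper variables for this sketch.
DIM=3            # 2 for 2D, 3 for 3D
PRECISION=dp     # "dp" for double precision, empty for single
SOLVER="${DIM}d${PRECISION}"
echo "$SOLVER"   # prints: 3ddp
```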
A script for running Ansys/Fluent on a single node:
<code>
#!/bin/sh
#SBATCH -J fluent
#SBATCH -N 1
#SBATCH -o job.%j.out
#SBATCH --ntasks-per-node=24

module purge
module load ANSYS/<version>

JOURNALFILE=fluent.jou

time fluent 3ddp -g -t 24 < "$JOURNALFILE" > fluent.out
</code>
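The hard-coded ''-t 24'' has to match ''--ntasks-per-node''; a sketch of deriving the count from Slurm's own variables instead (the values below are stand-ins for what Slurm normally exports inside a job):

```shell
#!/bin/sh
# Stand-in values; inside a real job, Slurm exports these automatically.
SLURM_NNODES=1
SLURM_NTASKS_PER_NODE=24
# Total number of MPI processes for fluent's -t argument
NTASKS=$((SLURM_NNODES * SLURM_NTASKS_PER_NODE))
echo "$NTASKS"   # prints: 24
```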
==== Multiple node script ====

A script for running Ansys/Fluent on multiple nodes:
<code>
#!/bin/sh
#SBATCH -J fluent_1               # job name
#SBATCH -N 2                      # number of nodes
#SBATCH --partition=mem_0096      # mem_0096, mem_0384, mem_0768
#SBATCH --qos=mem_0096
#SBATCH --ntasks-per-node=48      # number of cores per node
#SBATCH --threads-per-core=1      # disable hyperthreading

module purge
module load ANSYS/<version>

unset SLURM_JOBID
export FLUENT_AFFINITY=0
export SLURM_ENABLED=1
export SCHEDULER_TIGHT_COUPLING=1
FL_SCHEDULER_HOST_FILE=slurm.${SLURM_JOB_ID}.hosts
/bin/rm -rf ${FL_SCHEDULER_HOST_FILE}
scontrol show hostnames "$SLURM_JOB_NODELIST" > ${FL_SCHEDULER_HOST_FILE}

JOURNALFILE=fluent_transient      # .jou file

if [ $SLURM_NNODES -eq 1 ]; then
    # single node with shared memory
    fluent 3ddp -g -t $SLURM_NTASKS -i $JOURNALFILE.jou > $JOURNALFILE.log
else
    # multi-node: distribute over the hosts listed in the host file
    fluent 3ddp -platform=intel -g -t $SLURM_NTASKS \
        -cnf=${FL_SCHEDULER_HOST_FILE} \
        -i $JOURNALFILE.jou > $JOURNALFILE.log
fi
</code>
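The host file written by ''scontrol show hostnames'' is a plain list of node names, one per line; a sketch of sanity-checking it against the node count before launching Fluent (the file contents below are fabricated stand-ins for real Slurm output):

```shell
#!/bin/sh
# Fabricated stand-ins; inside a job, scontrol and $SLURM_NNODES provide these.
SLURM_NNODES=2
FL_SCHEDULER_HOST_FILE=slurm.test.hosts
printf 'n0001\nn0002\n' > "$FL_SCHEDULER_HOST_FILE"
# One line per allocated node is expected
NHOSTS=$(wc -l < "$FL_SCHEDULER_HOST_FILE")
if [ "$NHOSTS" -eq "$SLURM_NNODES" ]; then
    echo "host file OK"
fi
rm -f "$FL_SCHEDULER_HOST_FILE"
```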