doku:ansys — revision 2021/10/22 10:17 [Input file] by sfrank, last revised 2022/08/30 13:40 by ir

Keep in mind to set the appropriate path for the cluster. Here the files will be saved in the same directory as the journal file. For the sake of clarity it may be better to create an additional directory for these backup files.
This is a relative reference to the path where your *.cas.gz file is located. Also keep in mind that the folder called "…
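For example, such a separate directory can be created next to the journal file before submitting the job. This is only a sketch; the directory name ''backup'' is illustrative and not prescribed by this page:

```shell
#!/bin/sh
# Create a separate directory for the autosaved backup files,
# relative to the directory containing the journal file.
# "backup" is an illustrative name, not a required one.
mkdir -p ./backup
ls -d ./backup
```

Any relative path used in the journal file is then resolved from the directory in which Fluent is started.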
----
===== Job script =====

==== Single node script ====
A script for running Ansys/Fluent on a single node:
<code bash>
…
fi
</code>

==== Multiple node script ====
A script for running Ansys/Fluent on multiple nodes:

<code bash>
#!/bin/sh
#SBATCH -J fluent_1                # job name
#SBATCH -N 2                       # number of nodes
#SBATCH --partition=mem_0096       # mem_0096, mem_0384, mem_0768
#SBATCH --qos=mem_0096
#SBATCH --ntasks-per-node=48       # number of cores per node
#SBATCH --threads-per-core=1       # disable hyperthreading

module purge
module load ANSYS/…

unset SLURM_JOBID
export FLUENT_AFFINITY=0
export SLURM_ENABLED=1
export SCHEDULER_TIGHT_COUPLING=1
FL_SCHEDULER_HOST_FILE=slurm.${SLURM_JOB_ID}.hosts
/bin/rm -rf ${FL_SCHEDULER_HOST_FILE}
scontrol show hostnames "$SLURM_JOB_NODELIST" > ${FL_SCHEDULER_HOST_FILE}

JOURNALFILE=fluent_transient       # name of the .jou file without extension

if [ $SLURM_NNODES -eq 1 ]; then
    # single node with shared memory
    fluent 3ddp -g -t $SLURM_NTASKS -i $JOURNALFILE.jou > $JOURNALFILE.log
else
    # multiple nodes
    fluent 3ddp -platform=intel -g -t $SLURM_NTASKS -pib.ofed -mpi=intel -cnf=${FL_SCHEDULER_HOST_FILE} -i $JOURNALFILE.jou > $JOURNALFILE.log
fi
</code>
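The ''-t'' option receives ''$SLURM_NTASKS'', which Slurm derives from the resource request; with ''-N 2'' and ''--ntasks-per-node=48'' as above, Fluent is started with 2 × 48 = 96 parallel processes. A quick arithmetic check of that relationship (the variable names here are illustrative, not part of the job script):

```shell
#!/bin/sh
# Illustrative check: reproduce how SLURM_NTASKS follows from the
# job's resource request (-N 2, --ntasks-per-node=48).
NODES=2
TASKS_PER_NODE=48
NTASKS=$((NODES * TASKS_PER_NODE))
echo "fluent will start $NTASKS parallel processes"
```

The host file written by ''scontrol show hostnames'' tells Fluent via ''-cnf'' which of those nodes to spawn its MPI ranks on.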