Differences

This shows you the differences between two versions of the page.

doku:cfd [2021/10/11 05:21]
sfrank [Submit job]
doku:cfd [2021/10/22 08:56]
sfrank
Line 1: Line 1:
-====== Computational Fluid Dynamics ======+====== Engineering ======
  
   * [[https://www.3ds.com/de/produkte-und-services/simulia/produkte/abaqus/|Simulia ABAQUS]],    * [[https://www.3ds.com/de/produkte-und-services/simulia/produkte/abaqus/|Simulia ABAQUS]], 
Line 5: Line 5:
   * [[https://www.openfoam.com|OpenFOAM]]   * [[https://www.openfoam.com|OpenFOAM]]
   * [[https://www.comsol.de/|COMSOL]]   * [[https://www.comsol.de/|COMSOL]]
- 
-====== ANSYS-Fluent (CFD) ====== 
- 
-===== Module ===== 
- 
-Check available versions of Ansys: 
- 
-<code> 
-module avail 2>&1 | grep -i Ansys 
-</code> 
-Load the correct version of Ansys, e.g., 
- 
-<code> 
-module load *your preferred module* 
-</code> 
- 
-Available modules on VSC-4: 
-  * ANSYS/2019R3 
-  * ANSYS/2020R2 
-  * ANSYS/2021R1 
- 
----- 
- 
-===== General Workflow ===== 
- 
-The following figure gives an overview of the general workflow when you use Fluent on your local machine for pre- and post-processing and the cluster for solving your case.\\ For this workflow no graphical connection is necessary. 
- 
-All files needed for this testcase are provided here: 
-{{ :doku:fluent_testcase.zip |}} 
- 
-{{ :doku:fluent_workflow.png?nolink |}} 
- 
- 
-===== Input file ===== 
- 
-Create a journal file (fluent.jou), which is written in a dialect of Lisp called Scheme and contains all the instructions to be executed during the run. A basic form of this file is as follows: 
- 
-<code> 
-; ----------------------------------------------------------- 
-; SAMPLE JOURNAL FILE 
-; 
-; read case file (*.cas.gz) that had previously been prepared 
-file/read-case "tubench1p4b.cas.gz" 
-file/autosave/data-frequency 10 
-solve/init/initialize-flow 
-solve/iterate 500 
-file/write-data "tubench1p4b.dat.gz" 
-exit yes 
-</code> 
-The ''%%autosave/data-frequency%%'' setting will save a *.dat file every 10 iterations.\\ Preferably, make these settings in the GUI, as shown in the following graphic. 
- 
-{{ :doku:autosave_gui_bearb.png?nolink |}} 
- 
-Keep in mind to set the appropriate path for the cluster. Here the files will be saved in the same directory as the journal file. For the sake of clarity it can be better to create a separate directory for these backup files, i.e. <code>./Autosave/*your_filename*.gz</code> 
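Such a subdirectory can also be set as the autosave target directly in the journal file. A minimal sketch, assuming the ''Autosave'' directory already exists on the cluster and using a hypothetical root name (Fluent appends the iteration number to it):

<code>
; write autosave files into the Autosave subdirectory
; (hypothetical root name; adjust to your case)
file/autosave/root-name "./Autosave/tubench1p4b"
</code>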
- 
----- 
- 
- 
-===== Job script ===== 
- 
-A script for running Ansys/Fluent called fluent_run.sh is shown below. 
- 
-<code> 
-#!/bin/sh 
-#SBATCH -J fluent 
-#SBATCH -N 2 
-#SBATCH -o job.%j.out 
-#SBATCH --ntasks-per-node=24 
-#SBATCH --threads-per-core=1 
-#SBATCH --time=04:00:00 
- 
-module purge 
-module load *your preferred module* 
- 
-JOURNALFILE=fluent.jou 
- 
-if [ $SLURM_NNODES -eq 1 ]; then 
-    # Single node with shared memory 
-    fluent 3ddp -g -t $SLURM_NTASKS -i $JOURNALFILE > fluent.log  
-else 
-    # Multi-node: call Fluent with the 3D double-precision solver (3ddp), 
-    # without GUI (-g), started via SLURM with $SLURM_NTASKS processes, 
-    # using the InfiniBand interconnect and Open MPI. 
-    # (Comments must not follow the line-continuation backslashes, 
-    # so the options are documented here instead.) 
-    fluent 3ddp -g -slurm -t $SLURM_NTASKS -pinfiniband -mpi=openmpi \ 
-        -i $JOURNALFILE > fluent.log 
-fi 
- 
-</code> 
- 
-This job script adapts the Fluent start command to the requested configuration: you can easily change the number of compute nodes, and the script generates the appropriate command to start the calculation. 
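The branching between the single-node and multi-node start commands can be tried out in isolation. In this sketch the SLURM variables are set by hand to hypothetical values (2 nodes with 24 tasks each, matching the ''#SBATCH'' lines above) instead of being provided by the scheduler:

<code>
#!/bin/bash
# Hypothetical values; in a real job SLURM sets these automatically.
SLURM_NNODES=2
SLURM_NTASKS=$((SLURM_NNODES * 24))   # 24 tasks per node, as requested above

if [ $SLURM_NNODES -eq 1 ]; then
    # Single node with shared memory
    echo "single-node: fluent 3ddp -g -t $SLURM_NTASKS"
else
    # Multi-node via SLURM
    echo "multi-node: fluent 3ddp -g -slurm -t $SLURM_NTASKS -pinfiniband -mpi=openmpi"
fi
</code>

With one node requested, the same logic falls back to the simpler shared-memory command.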
- 
----- 
- 
-===== License server settings ===== 
- 
-These variables are defined when loading the fluent module file: 
- 
-<code> 
-setenv       ANSYSLI_SERVERS 2325@LICENSE.SERVER 
-setenv       ANSYSLMD_LICENSE_FILE 1055@LICENSE.SERVER 
-</code> 
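After loading the module you can check that both license variables are present in your environment. A minimal sketch — the values below are the placeholders from the module file, exported by hand here; on the cluster ''module load'' sets them for you:

<code>
#!/bin/bash
# Placeholder values from the module file (the real server is site-specific):
export ANSYSLI_SERVERS=2325@LICENSE.SERVER
export ANSYSLMD_LICENSE_FILE=1055@LICENSE.SERVER

# Both license variables should show up here:
env | grep -E '^ANSYS(LI_SERVERS|LMD_LICENSE_FILE)='
</code>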
- 
----- 
- 
-===== Submit job ===== 
- 
-<code> 
-sbatch fluent_run.sh 
-</code> 
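''sbatch'' prints a line of the form ''Submitted batch job <jobid>''. If you want to track the job from a script, you can capture the id; a sketch using a sample output line instead of a real submission:

<code>
#!/bin/bash
# Sample sbatch output line (a real submission prints this to stdout):
line="Submitted batch job 123456"

# Extract the job id (4th field):
jobid=$(echo "$line" | awk '{print $4}')
echo "$jobid"

# With the id you can then monitor the job, e.g.: squeue -j "$jobid"
</code>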
- 
----- 
- 
-===== Restarting a calculation ===== 
- 
-To restart a fluent job, you can read in the latest data file: 
- 
-<code> 
-; read case file (*.cas.gz) that had previously been prepared 
-file/read-case "MyCaseFile.cas.gz" 
-file/read-data "MyCase_-1-00050.dat.gz"   ; read latest data file and continue calculation 
-solve/init/initialize-flow 
-solve/iterate 500 
-file/write-data "MyCase.dat.gz" 
-exit yes 
-</code> 
- 
----- 
- 
-====== ABAQUS ====== 
- 
-===== ABAQUS 2016 ===== 
- 
-==== Sample job script ==== 
- 
-''%%/opt/ohpc/pub/examples/slurm/mul/abaqus%%'' 
- 
-<code> 
-#!/bin/bash 
-# 
-#SBATCH -J abaqus 
-#SBATCH -N 2 
-#SBATCH -o job.%j.out 
-#SBATCH -p E5-2690v4 
-#SBATCH -q E5-2690v4-batch 
-#SBATCH --ntasks-per-node=8 
-#SBATCH --mem=16G 
- 
-module purge 
-module load Abaqus/2016 
- 
-export LM_LICENSE_FILE=<license_port>@<license_server>:$LM_LICENSE_FILE 
- 
-# specify some variables: 
-JOBNAME=My_job_name 
-INPUT=My_Abaqus_input.inp 
-SCRATCHDIR="/scratch" 
- 
-# MODE can be 'mpi' or 'threads': 
-#MODE="threads" 
-MODE="mpi" 
- 
-# write one hostname per line to the file 'hostlist': 
-scontrol show hostname $SLURM_NODELIST > hostlist 
-cpu=$((SLURM_NTASKS / SLURM_JOB_NUM_NODES)) 
-echo $cpu 
- 
-mp_host_list="(" 
-for i in $(cat hostlist) 
-do 
-  mp_host_list="${mp_host_list}('$i',$cpu)," 
-done 
- 
-mp_host_list=`echo ${mp_host_list} | sed -e "s/,$/,)/"` 
- 
-echo "mp_host_list=${mp_host_list}" >> abaqus_v6.env 
- 
-abaqus interactive job=$JOBNAME cpus=$SLURM_NTASKS mp_mode=$MODE scratch=$SCRATCHDIR input=$INPUT 
-</code> 
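The host-list construction above can be tried out in isolation. This sketch uses two hypothetical hosts (node001, node002) with 8 CPUs each instead of a real SLURM allocation:

<code>
#!/bin/bash
# Hypothetical hosts; in the job script these come from scontrol/SLURM.
cpu=8
mp_host_list="("
for i in node001 node002
do
  mp_host_list="${mp_host_list}('$i',$cpu),"
done

# close the list: turn the trailing "," into ",)"
mp_host_list=$(echo ${mp_host_list} | sed -e "s/,$/,)/")

echo "$mp_host_list"
# -> (('node001',8),('node002',8),)
</code>

This is exactly the ''mp_host_list'' tuple format that gets appended to ''abaqus_v6.env''.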
- 
- 
----- 
  
 ===== ABAQUS 2016 ===== ===== ABAQUS 2016 =====
Line 308: Line 126:
  
 <code> <code>
-sbatch fluent_run.sh+sbatch karman.job
 </code> </code>
 +
 +==== Using a shared node ====
 +
 +If your case is not very demanding on hardware and you are interested in a fast solution, it is possible to use one of the shared nodes. These nodes are non-exclusive, so more than one job can use the provided hardware.
 +On these nodes you have to tell SLURM how much memory (RAM) your case needs. This value should be less than the maximum of 96GB these nodes offer; otherwise your job needs a whole node anyway.
 +Here we use --mem=20G to dedicate 20GB of memory.
 +
 +<code>
 +#!/bin/bash
 +# slurmsubmit.sh
 +
 +#SBATCH -n 1
 +#SBATCH --ntasks-per-node=1
 +#SBATCH --job-name="clustsw"
 +#SBATCH --qos=mem_0096
 +#SBATCH --mem=20G
 +
 +hostname
 +
 +module purge
 +module load Comsol/5.6
 +module list
 +.
 +.
 +.
 +</code>
 +
  
 ---- ----
Line 321: Line 166:
  
  
-^                         ^ Parameter / Function Sweep         ^ Batch Sweep                      ^ Cluster Sweep ^ 
-^ Loop                    | inner or outer                     | outer                            | outer  | 
-^ Settings in GUI         | Enable distributed parameter sweep | Set number of simultaneous jobs\\        || 
-^ Entries on command line | --mpibootstrap slurm               | --mode desktop                           || 
-^ Description             | MPI-synchronized                   | next job drops into free slots           || 
  
  • doku/cfd.txt
  • Last modified: 2023/12/23 09:58
  • by amelic