This version (2024/10/24 10:28) is a draft.
Approvals: 0/1. The previously approved version (2021/05/14 18:40) is available.
Working on binf nodes
For available bioinformatics nodes, see High performance parallel storage + large memory nodes
Interactive mode
1. VSC-3 >  salloc -N 1 -p binf --qos normal_binf -C binf -L intel@vsc    (... add --nodelist binf-13 to request a specific node)
2. VSC-3 >  squeue -u $USER
3. VSC-3 >  srun -n 4 hostname    (... while still on the login node !)
4. VSC-3 >  ssh binf-11    (... or whatever other node has been assigned)
5. VSC-3 >  module purge
6. VSC-3 >  module load intel/17
   cd examples/09_special_hardware/binf
   icc -xHost -qopenmp sample.c
   export OMP_NUM_THREADS=8
   ./a.out
<HTML> <!-- slide 8 --> </HTML>
Working on binf nodes cont.
SLURM submission script slrm.sbmt.scrpt
#!/bin/bash
#
# usage: sbatch ./slrm.sbmt.scrpt
#
#SBATCH -J gmxbinfs
#SBATCH -N 2
#SBATCH --partition binf
#SBATCH --qos normal_binf
#SBATCH -C binf
#SBATCH --ntasks-per-node 24
#SBATCH --ntasks-per-core 1

module purge
module load intel/17 intel-mkl/2017 intel-mpi/2017 gromacs/5.1.4_binf

export I_MPI_PIN=1
export I_MPI_PIN_PROCESSOR_LIST=0-23
export I_MPI_FABRICS=shm:tmi
export I_MPI_TMI_PROVIDER=psm2
export OMP_NUM_THREADS=1
export MDRUN_ARGS=" -dd 0 0 0 -rdd 0 -rcon 0 -dlb yes -dds 0.8 -tunepme -v -nsteps 10000 "

mpirun -np $SLURM_NTASKS gmx_mpi mdrun ${MDRUN_ARGS} -s hSERT_5HT_PROD.0.tpr -deffnm hSERT_5HT_PROD.0 -px hSERT_5HT_PROD.0_px.xvg -pf hSERT_5HT_PROD.0_pf.xvg -swap hSERT_5HT_PROD.0.xvg
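With the #SBATCH values in the script above, SLURM sets $SLURM_NTASKS to nodes × tasks-per-node, i.e. 2 × 24 = 48 MPI ranks for mpirun. A small sketch (plain shell, runnable without SLURM; the variable names are illustrative, not SLURM's own) to double-check the rank count before submitting:

```shell
# Mirror the resource request from the #SBATCH lines:
NODES=2            # matches: #SBATCH -N 2
TASKS_PER_NODE=24  # matches: #SBATCH --ntasks-per-node 24

# SLURM will expose this product as $SLURM_NTASKS inside the job.
NTASKS=$((NODES * TASKS_PER_NODE))

echo "mpirun will start ${NTASKS} gmx_mpi ranks"
```

Keeping --ntasks-per-node consistent with I_MPI_PIN_PROCESSOR_LIST=0-23 (24 entries) ensures each rank is pinned to its own core.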
<HTML> <!-- slide 9 --> </HTML>