doku:rstat (last modified 2021/09/29 by goldenberg)
[[http://
We installed the libraries Rmpi, doMPI and foreach and their dependencies on VSC2 and VSC4.
These libraries give you the possibility to parallelize loops in R.
==== Example ====

Given a simple for loop:
<code>
for (i in 1:50) {mean(rnorm(1e+07))}
</code>
=== Sequential execution ===

Sequential execution of this loop on VSC3 takes:
<code>
/
</code>
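Such a timing can be reproduced interactively by wrapping the loop in ''system.time()''. The following is only a sketch; the vector length is reduced here from 1e+07 to 1e+05 (an adjustment made for illustration) so that it finishes in a few seconds:

```r
# Sketch: time the sequential loop with system.time().
# Vector length reduced to 1e+05 so the demonstration runs quickly.
t <- system.time(
  for (i in 1:50) mean(rnorm(1e+05))
)
# "elapsed" is the wall-clock time in seconds
print(t[["elapsed"]])
```

The ''user'' and ''system'' components report CPU time, while ''elapsed'' is the wall-clock time that the parallel version below aims to reduce.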
=== Parallel execution ===
In R, the loop may be parallelized in the following form (**berk-rmpi.R**):
<code>
# basic example with foreach
# start R as usual
library(Rmpi)
library(doMPI)
cl <- startMPIcluster()
# register the cluster with foreach and run the loop in parallel
# (standard doMPI pattern)
registerDoMPI(cl)
x <- foreach(i = 1:50) %dopar% mean(rnorm(1e+07))
closeCluster(cl)
mpi.quit()
</code>
On VSC3 the script reads:
<code>
#!/bin/sh
#SBATCH -J rstat
#SBATCH -N 1
#SBATCH --tasks-per-node=16

module unload intel-mpi/
module load intel-mpi/
module load R

export I_MPI_FABRICS=shm:tcp

mpirun R CMD BATCH berk-rmpi.R
</code>
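Assuming the job script above is saved as ''rstat.sh'' (a file name chosen here for illustration), it is submitted to SLURM in the usual way:

```shell
# Submit the job script (hypothetical file name rstat.sh)
sbatch rstat.sh
# Check its status in the queue
squeue -u $USER
```

R's output, including the timing shown below, is written by ''R CMD BATCH'' to ''berk-rmpi.Rout'' in the submission directory.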
yielding an execution time [s] of
<code>
> proc.time()
</code>
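For reference, ''proc.time()'' reports ''user'', ''system'' and ''elapsed'' times in seconds since the R process started; differencing two calls times an individual section. A minimal sketch:

```r
# proc.time() returns user, system and elapsed times (seconds since
# the R process started); the difference of two calls times a section.
p0 <- proc.time()
invisible(sum(rnorm(1e+05)))
dt <- proc.time() - p0
print(dt[["elapsed"]])  # wall-clock seconds spent in the section
```

When comparing the sequential and parallel runs, the ''elapsed'' value is the relevant figure, since CPU time is spread across the MPI workers.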