[[http://www.r-project.org/]]
  
We installed the libraries Rmpi, doMPI and foreach, together with their dependencies, on VSC2 and VSC4.
These libraries allow you to parallelize loops across multiple nodes with MPI, using the foreach function, which is very similar to a for loop.
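As a brief illustration of this similarity, the following sketch (using only the foreach package with its sequential %do% operator; the loop body is just an example) compares a plain for loop with its foreach counterpart:
<code>
library (foreach)

# plain for loop: results are discarded unless stored explicitly
for (i in 1:3) { sqrt(i) }

# equivalent foreach loop: returns the results as a list
res <- foreach (i = 1:3) %do% sqrt(i)
</code>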
  
==== Example ====

Given a simple for loop:
<code>
for (i in 1:50) {mean(rnorm(1e+07))}
</code>
  
=== Sequential execution ===

Sequential execution on VSC3 results in an execution time [s] of:
<code>
/opt/sw/R/current/bin/R
...
</code>
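
A minimal sketch of how such a timing can be taken with proc.time() (the reported numbers depend on hardware and current load):
<code>
# time the sequential loop; proc.time() reports user/system/elapsed seconds
t0 <- proc.time()
for (i in 1:50) {mean(rnorm(1e+07))}
proc.time() - t0
</code>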
  
=== Parallel execution ===
  
In R, the loop is parallelized in **berk-rmpi.R** as follows:
<code>
# basic example with foreach
# start R as usual: 'R' or via a batch job
library (Rmpi)
library (doMPI)
cl <- startMPIcluster ()
# remaining steps follow the standard doMPI pattern: register, run foreach, shut down
registerDoMPI (cl)
# the parallel counterpart of the for loop; %dopar% distributes the iterations
result <- foreach (i = 1:50) %dopar% mean(rnorm(1e+07))
closeCluster (cl)
mpi.quit ()
</code>
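
Unlike a plain for loop, foreach also returns the per-iteration results. A small sketch (with the doMPI backend registered as above; the variable name is illustrative) using the .combine argument of foreach to collect the means into a numeric vector:
<code>
# collect the 50 means into a vector instead of a list
means <- foreach (i = 1:50, .combine = c) %dopar% mean(rnorm(1e+07))
length (means)   # 50
summary (means)
</code>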
  
On VSC3, the job script reads:
<code>
#!/bin/sh
#SBATCH -J rstat
#SBATCH -N 1
#SBATCH --tasks-per-node=16

module unload intel-mpi/5
module load intel-mpi/4.1.3.048
module load R

export I_MPI_FABRICS=shm:tcp

mpirun R CMD BATCH berk-rmpi.R
</code>
yielding an execution time [s] of:
<code>
> proc.time()
   user  system elapsed 
  4.566   0.156   5.750 
</code>
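
The speed-up depends on how many workers the foreach backend actually sees. A minimal sketch for checking this from inside the R session (after registerDoMPI; getDoParWorkers() is part of the foreach package):
<code>
library (foreach)
# with 16 MPI ranks, doMPI uses one rank as the master, so 15 workers are expected
getDoParWorkers ()
</code>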