====== Using R on VSC2 and VSC3 with MPI libraries ======
[[http://www.r-project.org/]]
  
We installed the libraries Rmpi, doMPI and foreach and their dependencies on VSC2 and VSC4.
These libraries allow you to parallelize loops across multiple nodes with MPI using the foreach function, which is very similar to a for loop.
  
==== Example ====

Given a simple for loop:
<code>
for (i in 1:50) {mean(rnorm(1e+07))}
</code>
  
=== Sequential execution ===

Sequential execution on VSC3 leads to an execution time [s] of:
<code>
/opt/sw/R/current/bin/R
</code>
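Such a measurement can be taken in a plain R session; a minimal sketch using base R's system.time() (added here for illustration, not part of the original example) is:
<code>
# run the loop sequentially; system.time() reports
# user/system/elapsed times in seconds
system.time(
  for (i in 1:50) { mean(rnorm(1e+07)) }
)
</code>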
  
=== Parallel execution ===

In R, the code **berk-rmpi.R** may be parallelized in the following form:
<code>
# basic example with foreach
# start R as usual: 'R' or via a batch job
library(Rmpi)
library(doMPI)

# connect to the MPI workers spawned by mpirun and register them
# as the parallel backend for %dopar%
cl <- startMPIcluster()
registerDoMPI(cl)

# the 50 iterations are distributed across the MPI workers
result <- foreach (i = 1:50) %dopar% {
  mean(rnorm(1e+07))
}

closeCluster(cl)
mpi.finalize()
</code>
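By default, %dopar% collects the iteration results into a list. Standard foreach functionality (shown here only as an illustrative variant, not part of the original example) can combine them directly, e.g. into a numeric vector via the .combine argument:
<code>
# variant: concatenate the 50 means into one numeric vector
result <- foreach (i = 1:50, .combine = c) %dopar% {
  mean(rnorm(1e+07))
}
</code>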

On VSC2, a batch job is submitted using the following script:
<code>
#$ -N rstat
#$ -pe mpich 16
#$ -l h_rt=01:00:00

mpirun -machinefile $TMPDIR/machines -np $NSLOTS /opt/sw/R/current/bin/R CMD BATCH berk-rmpi.R
</code>
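Assuming the script above is saved as **rstat.job** (an illustrative file name, not from the original page), it is handed to VSC2's grid engine with:
<code>
qsub rstat.job
</code>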
yielding an execution time [s] of
<code>
> proc.time()
   user  system elapsed 
  8.495   0.264   9.616 
</code>

On VSC3 the script reads:
<code>
#!/bin/sh
#SBATCH --tasks-per-node=16

module unload intel-mpi/5
module load intel-mpi/4.1.3.048
module load R

export I_MPI_FABRICS=shm:tcp

mpirun R CMD BATCH berk-rmpi.R
</code>
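Analogously, assuming the SLURM script is saved as **rstat.slrm** (again an illustrative name), it is submitted on VSC3 with:
<code>
sbatch rstat.slrm
</code>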
yielding an execution time [s] of
<code>
> proc.time()
   user  system elapsed 
  4.566   0.156   5.750 
</code>