====== Using R on VSC2 and VSC3 with MPI libraries ======
[[http://

We installed the libraries Rmpi, doMPI and foreach and their dependencies on VSC2 and VSC4. These libraries give you the possibility to parallelize R code with MPI.
==== Example ====

Given a simple for loop:
<code>
for (i in 1:50) {mean(rnorm(1e+07))}
</code>
=== Sequential execution ===

Sequential execution takes on VSC3:
<code>
/
</code>
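For reference, the sequential run time can be reproduced with base R's ''system.time()''. This is a minimal sketch with a reduced sample size (''n = 1e+05'' instead of the ''1e+07'' used above) so it finishes quickly; scale ''n'' back up to match the example:

```r
# Sketch: time a reduced version of the sequential loop with base R.
# n is kept small here for illustration; the example above uses n = 1e+07.
n <- 1e+05
timing <- system.time(
  for (i in 1:50) {
    mean(rnorm(n))
  }
)
print(timing["elapsed"])  # wall-clock seconds, comparable to proc.time()
```

The ''elapsed'' entry is the wall-clock time, which is the figure the ''proc.time()'' outputs below report.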
=== Parallel execution ===

In R, the code **berk-rmpi.R** may be parallelized in the following form:
<code>
# basic example with foreach
# start R as usual
library(Rmpi)
library(doMPI)
cl <- startMPIcluster()
registerDoMPI(cl)

result <- foreach(i = 1:50) %dopar% {
  mean(rnorm(1e+07))
}

closeCluster(cl)
mpi.finalize()
</code>
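The foreach pattern can be smoke-tested without an MPI cluster: when no parallel backend is registered, ''%dopar%'' falls back to sequential execution (foreach issues a warning), so the loop structure can be checked on a login node or laptop. A minimal sketch with a reduced problem size:

```r
library(foreach)

# With no backend registered, %dopar% runs sequentially (with a warning),
# which is enough to check that the loop body and result shape are correct.
result <- foreach(i = 1:5) %dopar% {
  mean(rnorm(1e+04))
}

length(result)   # 5: foreach returns a list with one element per iteration
```

Once ''registerDoMPI(cl)'' has been called, the same ''%dopar%'' loop is distributed across the MPI workers without any further change.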

On VSC2, a batch job is submitted by using the following script:
<code>
#$ -N rstat
#$ -pe mpich 16
#$ -l h_rt=01:
mpirun -machinefile $TMPDIR/
</code>
yielding an execution time [s] of:
<code>
> proc.time()
   user  system elapsed
  8.495
</code>
On VSC3 the script reads:
<code>
#!/bin/sh
#SBATCH --tasks-per-node=16
module unload intel-mpi/5
module load intel-mpi/
module load R
export I_MPI_FABRICS=shm:
mpirun R CMD BATCH berk-rmpi.R
</code>
yielding an execution time [s] of:
<code>
> proc.time()
   user  system elapsed
</code>