
From serial to parallel jobs

The Message Passing Interface (MPI) is a standard that describes message exchange on distributed computing systems. Typically, at the start of an MPI application, several communicating processes are launched in parallel. All processes work on the same problem and exchange data with each other.

(See also MPI 3.0 or MPI 2.2)

There are several implementations of the MPI standard, including MPICH, Open MPI, and many more, each supporting a particular version of the standard.

The most common languages for working with MPI are Fortran, C, and C++. However, there are also bindings for other languages, e.g., Perl, Python, R, Ruby, Java, or CL.

The parallel MPI interface for R is the Rmpi package, which is installed on VSC-2 and VSC-3.

Matlab provides a parallel computing toolbox using MPI and PVM.

Grid computing in Maple also supports the MPI protocol.

OpenMP supports multi-platform shared-memory multiprocessing in C, C++, and Fortran. It provides compiler directives, library routines, and environment variables that influence run-time behaviour.

An OpenMP code consists of serial and parallel sections. Execution starts with a single serial thread, which spawns several parallel threads within a parallel section. In contrast to MPI, the threads have access to the same memory, so no message passing between them is necessary.

For further information and training, please have a look at the VSC website!

The VSC team regularly organises beginner- and intermediate-level courses in the field of high-performance computing. If you have specialised questions about your code that cannot be addressed in one of these courses, you can contact our team directly to work out a solution individually.

  • doku/seriell2par.1504262504.txt.gz
  • Last modified: 2017/09/01 10:41
  • by ir