====== From serial to parallel jobs ======

===== Message Passing Interface (MPI) =====

Message Passing Interface (MPI) is a standard describing message exchange on distributed-memory computing systems. Typically, at the start of an MPI application, several communicating processes are launched in parallel. All processes work on the same problem and exchange data with each other. (See also [[http://www.mpi-forum.org/docs/mpi-3.0/mpi30-report.pdf|MPI 3.0]] or [[http://www.mpi-forum.org/docs/mpi-2.2/mpi22-report.pdf|MPI 2.2]].)

There are several implementations of the MPI standard, including MPICH, Open MPI, and many more, each supporting a certain version of the standard. The most common languages for MPI programming are **Fortran**, **C**, and **C++**. However, bindings also exist for other languages, e.g., Perl, Python, R, Ruby, Java, and CL.

An MPI interface for **R** is provided by [[rstatexample|Rmpi]], which is installed on VSC-3 and VSC-4. [[http://de.mathworks.com/help/distcomp|Matlab]] provides a parallel computing toolbox using MPI and PVM. Grid computing in [[http://www.maplesoft.com/support/help/Maple/view.aspx?path=Grid/Launch|Maple]] also supports the MPI protocol.

===== Open Multi-Processing (OpenMP) =====

OpenMP supports multi-platform shared-memory multiprocessing in C, C++, and Fortran, providing compiler directives, library routines, and environment variables that influence run-time behavior. An OpenMP code consists of serial and parallel sections: execution begins with a single serial thread, which spawns a team of parallel threads within each parallel section. In contrast to MPI, all threads have access to the same memory, so no message passing between threads is necessary.