doku:seriell2par [2021/05/14 13:35] (current) goldenberg
===== Message Passing Interface (MPI) =====
Message Passing Interface (MPI) is a standard describing message exchange on distributed computing systems. Generally, at the beginning of an MPI application several communicating processes are started in parallel. All processes work on the same problem and exchange data with each other.
(See also
[[http://
There are several implementations of the MPI standard,
The most common languages for working with MPI are **Fortran**, **C**, and **C++**. However, there are also bindings for other languages, e.g., Perl, Python, R, Ruby, Java, or CL.
The parallel version of **Rstat** is [[rstat&#

[[http://

Grid computing in [[http://
===== Open Multi-Processing (OpenMP) =====
OpenMP supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran. It provides compiler directives, library routines, and environment variables that influence run-time behavior.
The code consists of serial and parallel sections. At the beginning there is a single serial thread, which spawns several parallel threads within a parallel section.
Contrary to MPI, the threads have access to the same memory, so no message passing between the threads is necessary.
For further information and training please look at the [[http://typo3.vsc.ac.at/research/vsc-research-center/vsc-school-seminar/|VSC Website]]!

Continuously,