====== From Serial to Parallel ======

Parallel programs typically follow one of two models: the Message Passing Interface (MPI) for distributed-memory systems and Open Multi-Processing (OpenMP) for shared-memory systems. Both are described below.

===== Message Passing Interface (MPI) =====

The Message Passing Interface (MPI) is a standardized protocol for communication in distributed computing systems. MPI is designed for scenarios where multiple processes, each with its own memory space, need to exchange data by sending and receiving messages, for example when a program is distributed over several compute nodes.
  * **MPI documentation:** [[https://www.mpi-forum.org/docs/|MPI Forum standard documents]]
  * **Open MPI:** [[https://www.open-mpi.org/]]
  * **MPICH:** [[https://www.mpich.org/]]
**Key Points:**
  - MPI implementations include MPICH, Open MPI, and others.
  - Common programming languages for MPI include **Fortran**, **C**, and **C++**.
  - There are also MPI bindings for other languages such as Perl, Python, R, Ruby, Java, and CL.
  - The parallel version of **Rstat**, known as **Rmpi**, is available.
  - **Matlab** provides a parallel computing toolbox with MPI support.
  - **Maple** offers grid computing capabilities with MPI protocol support.
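A minimal sketch of the message-passing model in C, assuming an MPI installation that provides the usual ''mpicc'' compiler wrapper and ''mpirun'' launcher (file and binary names here are only examples): several independent processes are started, and each one reports its rank.

<code c>
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    /* Set up the MPI environment; every process executes this program. */
    MPI_Init(&argc, &argv);

    int size, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* ID of this process (0 .. size-1) */

    printf("Hello from rank %d of %d processes\n", rank, size);

    /* Shut down the MPI environment before exiting. */
    MPI_Finalize();
    return 0;
}
</code>

A typical build and run might look like ''mpicc hello_mpi.c -o hello_mpi'' followed by ''mpirun -np 4 ./hello_mpi''; the exact compiler wrapper, launcher, and module setup depend on the MPI implementation and the batch system in use.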
===== Open Multi-Processing (OpenMP) =====

OpenMP is designed for multi-platform shared memory multiprocessing in C, C++, and Fortran. It provides compiler directives, library routines, and environment variables that influence run-time behavior.

**Key Features:**
  - OpenMP uses a combination of serial and parallel sections within the code.
  - Initially, a serial thread is used, which then spawns several parallel threads.
  - Unlike MPI, OpenMP threads share the same memory space, eliminating the need for message passing between threads.
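A minimal sketch of this serial/parallel structure, assuming a compiler with OpenMP support (e.g. ''gcc -fopenmp''); the file and binary names are only examples:

<code c>
#include <omp.h>
#include <stdio.h>

int main(void)
{
    /* Code outside a parallel region runs in a single (serial) thread. */
    printf("Serial section: one thread\n");

    /* The parallel directive spawns a team of threads that share memory. */
    #pragma omp parallel
    {
        int tid      = omp_get_thread_num();   /* ID of this thread       */
        int nthreads = omp_get_num_threads();  /* size of the thread team */
        printf("Parallel section: thread %d of %d\n", tid, nthreads);
    }

    /* After the parallel region the program continues with one thread. */
    printf("Serial section again: one thread\n");
    return 0;
}
</code>

Built for example with ''gcc -fopenmp omp_hello.c -o omp_hello'', the number of threads is usually controlled via the ''OMP_NUM_THREADS'' environment variable, e.g. ''OMP_NUM_THREADS=4 ./omp_hello''. No explicit message passing is needed, because all threads access the same memory.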
===== Further Information =====

For more information and hands-on training, have a look at the available training courses.

If you have specific questions about your code that cannot be addressed in these courses, you can contact us directly.