GeoDict User Guide 2025

Parallelization Options

GeoDict commands can use parallelization as a method to speed-up computational processes. For those commands, you can choose how many parallel processes should be used for the parallelization in the command's Options dialog. Using multiple processes usually requires additional licenses for each process.

HPC-Parallelization

Know how! The parallelization of a solver can use three different technical methods:

  1. Local-MPI parallelization,
  2. Distributed-MPI parallelization, and
  3. Local-Thread parallelization.

MPI stands for Message Passing Interface, a standardized and portable message-passing standard developed by researchers from academia and industry to run on a wide variety of parallel computing architectures. Solvers that use MPI are started multiple times as separate instances, each performing the simulation on a sub-volume of the whole structure. MPI is used to exchange intermediate results between the running instances. Local-MPI parallelization works within a single computer, while Distributed-MPI works across several computers. With Local-MPI, data can be exchanged very efficiently, whereas Distributed-MPI must send data between computers over Ethernet or a fast interconnect such as InfiniBand.
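As a rough analogy (this is not GeoDict or MPI code), the MPI pattern described above can be sketched with Python's standard multiprocessing module: several worker instances are started, each computes on its own sub-volume of the data, and intermediate results are sent back as messages. All names here (`worker`, `run_decomposed`) are hypothetical.

```python
# Illustrative sketch of MPI-style domain decomposition using Python's
# standard multiprocessing module. Each worker instance processes one
# sub-volume of the whole "structure" (here, a list of values) and
# sends its intermediate result back over a queue ("message passing").
from multiprocessing import Process, Queue

def worker(rank, sub_volume, queue):
    # Each instance computes on its own sub-volume only.
    partial = sum(v * v for v in sub_volume)  # stand-in for a solver step
    queue.put((rank, partial))  # send intermediate result to the coordinator

def run_decomposed(structure, n_instances):
    # Split the whole structure into one sub-volume per instance.
    chunk = len(structure) // n_instances
    queue = Queue()
    procs = []
    for rank in range(n_instances):
        lo = rank * chunk
        hi = len(structure) if rank == n_instances - 1 else lo + chunk
        p = Process(target=worker, args=(rank, structure[lo:hi], queue))
        p.start()
        procs.append(p)
    # Collect one message per instance, then combine the partial results.
    results = [queue.get() for _ in procs]
    for p in procs:
        p.join()
    return sum(partial for _, partial in results)
```

In real MPI (e.g. with `MPI_Send`/`MPI_Recv`), the instances would typically be separate OS processes on one or several machines, which is exactly the difference between Local-MPI and Distributed-MPI described above.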

Local-Thread parallel solvers are started only once per simulation, so no data has to be exchanged between separate instances. However, these solvers cannot use multiple compute nodes of a cluster to distribute the simulation data.

Click the Edit... button next to Parallelization to open the Parallelization Options dialog. In this dialog, you can choose between up to four modes:

  1. Sequential,
  2. Parallel (Shared Memory),
  3. Automatic Maximum of Threads, and
  4. Cluster.

©2025 Math2Market GmbH