GeoDict User Guide 2025

MPI Configuration

MPI Installation

In Windows, the necessary Microsoft MPI library is installed during the GeoDict Windows installation.

In Linux, the necessary libraries have to be installed separately. Refer to the GeoDict Linux installation.

MPI Configuration

You can access the Solver MPI Parallelization configuration options through the main menu of GeoDict. Select Settings - Settings from the menu bar to open the dialog, and in the dialog open the Parallelization tab. (The GeoDict Thread Parallelization options are explained on a separate page of this guide.)

[Screenshot: MPI settings dialog, Automatic mode]

Node File Environment Variable (Select compute nodes)

When using a job submission system, it is not known a priori which compute nodes will be used for the computation; the job system makes that decision when starting the job, depending on which nodes are available at that moment. PBS and SLURM pass the information about the assigned compute nodes to MPI as follows: the job system creates a file containing a list of the assigned compute nodes and stores the path to this file in an environment variable, typically named PBS_NODEFILE. If another job submission system is used, which may use a different variable name, you can change the name of the environment variable here.

This mechanism can also be used to choose the compute nodes manually, e.g. when no job submission system is used. In this case, you have to create your own "node" file. Create a text file containing the names of the computers that should be used for the computation, e.g.

node001
node002
node003

The names are separated by newlines, one per line. Then, set the path to the node file as the value of the PBS_NODEFILE variable. On Linux, this can be done with the command

export PBS_NODEFILE=<path to node file>

This command must be executed in the same shell before GeoDict is started. Then follow the Cluster Computing instructions to start a distributed simulation.
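Putting these steps together, a minimal shell sketch for a manual node selection (the node names and the file path are examples, not fixed names):

```shell
# Create a node file listing the machines to use for the computation
# (hostnames below are examples; use your cluster's actual node names).
cat > /tmp/my_nodes.txt <<'EOF'
node001
node002
node003
EOF

# Point the PBS_NODEFILE variable at the node file. This must happen
# in the same shell session from which GeoDict is started afterwards.
export PBS_NODEFILE=/tmp/my_nodes.txt

# Sanity check: print the variable and the file contents.
echo "Node file: $PBS_NODEFILE"
cat "$PBS_NODEFILE"
```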

MPI Installation and MPI Path (Switch between different MPI packages)

You can switch between Automatic and Manual in the MPI Installation drop-down menu.

In the default case, Automatic, GeoDict searches for a usable installed MPI and uses it. Check the console output if you are unsure which MPI version is being used:

[Screenshot: console output showing the detected MPI version]

To change the MPI version that is used, select Manual and browse to the mpiexec executable of the desired MPI version.

[Screenshot: MPI settings dialog, Manual mode]
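Outside of GeoDict, a quick way to see which MPI installation is first on the PATH (and would therefore typically be found in Automatic mode) is a sketch like:

```shell
# Query the mpiexec found on the PATH, if any, and report its version.
if command -v mpiexec >/dev/null 2>&1; then
    MPI_INFO=$(mpiexec --version 2>&1 | head -n 1)
else
    MPI_INFO="no mpiexec found on PATH"
fi
echo "$MPI_INFO"
```

The full path printed by `command -v mpiexec` is also the path to enter in the MPI Path field when switching to Manual.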

MPICH-3.2 is used by default in Linux, but OpenMPI-1.10.7 can be a better choice in computer cluster environments. Many clusters connect their compute nodes with InfiniBand. Unfortunately, MPICH-3.2 cannot use this interconnect natively, so the scaling behavior may be poor for more than four compute nodes. OpenMPI-1.10.7 can use InfiniBand interconnects natively and provides good scaling behavior beyond four compute nodes.

Important! Currently, OpenMPI-1.10.7 does not work on a cluster in combination with the FeelMath solver (ElastoDict). Use MPICH-3.2 instead of OpenMPI-1.10.7.

MPI Command-line Arguments

Additional command-line arguments can be added to the MPI call by entering them in the box. These arguments may be used, for example, to configure proper usage of the InfiniBand adapter or to show more debug information in the command-line output. For MPICH-3.2, for instance, the Internet Protocol over InfiniBand (IPoIB) feature can be enabled with the command-line argument -iface ib0.
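As an illustration of where these arguments end up, GeoDict passes them through to its mpiexec call. For MPICH-3.2, the resulting call looks roughly like the following sketch (the process count, node file, and solver command are placeholders, not values GeoDict uses verbatim):

```shell
# -iface ib0   route MPI traffic over the IPoIB interface ib0
# -verbose     print additional debug output from the MPI launcher
mpiexec -iface ib0 -verbose -n 64 -f "$PBS_NODEFILE" <solver command>
```

Only the extra arguments (here, -iface ib0 -verbose) are entered in the box; GeoDict supplies the rest of the call itself.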

Install MPI

It is also possible to trigger the MPI installation script setupMPI.sh from GeoDict's user interface. In this case, MPI is installed locally for the current user (the root installation is not available from the GUI).
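Alternatively, the script can be run manually from a shell; a sketch, assuming a hypothetical installation directory (see the GeoDict Linux installation instructions for the actual location and for the root variant):

```shell
# Path is an example; change it to your GeoDict installation directory.
cd /opt/GeoDict2025
./setupMPI.sh    # local (per-user) MPI installation, as from the GUI
```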

©2025 Math2Market GmbH