
			 MPICH2 Release 1.0.6

MPICH2 is an all-new implementation of MPI from the group at Argonne
National Laboratory.  It shares many goals with the original MPICH but
no actual code.  It is a portable, high-performance implementation of
the entire MPI-2 standard.  This release has all MPI-2 functions and
features required by the standard with the exception of support for the
"external32" portable I/O format. 

The distribution has been tested by us on a variety of machines in our
own environments. If you have problems, please report them to
mpich2-maint@mcs.anl.gov.

This README file should contain enough information to get you started
with MPICH2.  More extensive installation and user guides can be found
in the doc/installguide/install.pdf and doc/userguide/user.pdf files
respectively.  Additional information regarding the contents of the
release can be found at the end of this README under "Status of MPI-2
Features in MPICH2", in the CHANGES file in the top-level directory, and
in the RELEASE_NOTES file, where certain restrictions are detailed.
Finally, the MPICH2 web site, http://www.mcs.anl.gov/mpi/mpich2,
contains information on bug fixes and new releases.
  
Windows users should see the file README.windows in this directory. 


Getting Started
===============

The following instructions take you through a sequence of steps to get
the default configuration (TCP communication, MPD process management) of
MPICH2 up and running.  Alternate configuration options are described
later, beginning with the section "Alternatives".

1.  You will need the following prerequisites.

    - This tar file mpich2-1.0.6.tar.gz

    - A C compiler (gcc is sufficient)

    - A Fortran compiler if Fortran applications are to be used
      (g77 or gfortran is sufficient) 

    - A C++ compiler for the C++ MPI bindings (g++ is sufficient)

    - Python 2.2 or later (for the default MPD process manager)

    - If a Fortran 90 compiler is found, by default MPICH2 will
      attempt to build a basic MPI module.  This module contains the
      MPI routines that do not have "choice" arguments; i.e., the module
      does not contain any of the communication routines, such as
      MPI_Send, that can take arguments of different types.  You may
      still use those routines; however, the MPI module does not contain
      interface specifications for them.  If you have trouble with the
      configuration step and do not need Fortran 90, configure with
      --disable-f90 . 

    Configure will check for these prerequisites and try to work around
    deficiencies if possible.  (If you don't have Fortran, you will
    still be able to use MPICH2, just not with Fortran applications.)

    Also, you need to know what shell you are using, since different
    shells have different command syntax.  The command "echo $SHELL"
    prints out the current shell used by your terminal program.
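
    For example,

      echo $SHELL

    typically prints a path such as /bin/bash or /bin/tcsh.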

2.  Unpack the tar file and go to the top level directory:

      tar xfz mpich2-1.0.6.tar.gz
      cd mpich2-1.0.6

    If your tar doesn't accept the z option, use

      gunzip mpich2-1.0.6.tar.gz
      tar xf mpich2-1.0.6.tar
      cd mpich2-1.0.6

3.  Choose an installation directory (the default is /usr/local/bin):

      mkdir /home/you/mpich2-install

    It will be most convenient if this directory is shared by all of the
    machines where you intend to run processes.  If not, you will have
    to duplicate it on the other machines after installation.

4.  Configure MPICH2, specifying the installation directory:

    for csh and tcsh:

      ./configure --prefix=/home/you/mpich2-install |& tee c.txt

    for bash and sh:

      ./configure --prefix=/home/you/mpich2-install 2>&1 | tee c.txt

    Bourne-like shells, such as sh and bash, accept "2>&1 |".  Csh-like
    shells, such as csh and tcsh, accept "|&".  The file c.txt stores
    all messages generated by the configure command and is useful for
    diagnosis if something goes wrong.  Other configure options are
    described below.  You might also prefer to do a VPATH build (see
    below).  Check the c.txt file to make sure everything went well.
    Problems should be self-explanatory, but if not, send c.txt to
    mpich2-maint@mcs.anl.gov.

5.  Build MPICH2:

    for csh and tcsh:

      make |& tee m.txt

    for bash and sh:

      make 2>&1 | tee m.txt

    This step should succeed if there were no problems with the
    preceding step.  Check file m.txt.  If there were problems, send
    m.txt to mpich2-maint@mcs.anl.gov.

6.  Install the MPICH2 commands:

    for csh and tcsh:

      make install |& tee mi.txt

    for bash and sh:

      make install 2>&1 | tee mi.txt

    This step collects all required executables and scripts in the bin
    subdirectory of the directory specified by the prefix argument to
    configure. 

7.  Add the bin subdirectory of the installation directory to your path:

    for csh and tcsh:

      setenv PATH /home/you/mpich2-install/bin:$PATH

    for bash and sh:
  
      PATH=/home/you/mpich2-install/bin:$PATH ; export PATH

    Check that everything is in order at this point by doing 

      which mpd
      which mpiexec
      which mpirun

    All should refer to the commands in the bin subdirectory of your
    install directory.  It is at this point that you will need to
    duplicate this directory on your other machines if it is not
    in a shared file system such as NFS.

8.  MPICH2, unlike MPICH, uses an external process manager for scalable
    startup of large MPI jobs.  The default process manager is called
    MPD, which is a ring of daemons on the machines where you will run
    your MPI programs.  In the next few steps, you will get this ring up
    and tested.  More details on interacting with MPD can be found in
    the README file in mpich2-1.0.6/src/pm/mpd, such as how to list
    running jobs, kill, suspend, or otherwise signal them, and how to
    debug programs with "mpiexec -gdb".

    If you have problems getting the MPD ring established, see the
    Installation Guide for instructions on how to diagnose problems
    with your system configuration that may be preventing it.  Also
    see that guide if you plan to run MPD as root on behalf of users.
    Please be aware that we do not recommend running MPD as root until
    you have done testing to make sure that all is well.

    Begin by placing in your home directory a file named .mpd.conf
    (/etc/mpd.conf if root), containing the line 

      secretword=<secretword>

    where <secretword> is a string known only to yourself.  It should
    NOT be your normal Unix password.  Make this file readable and
    writable only by you:

      chmod 600 .mpd.conf
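
    For example, using a Bourne-like shell, the file can be created and
    protected as follows (substitute your own secret word for
    <secretword>):

      cd $HOME
      echo "secretword=<secretword>" > .mpd.conf
      chmod 600 .mpd.conf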

9.  The first sanity check consists of bringing up a ring of one mpd on
    the local machine, testing one mpd command, and bringing the "ring"
    down. 

      mpd &
      mpdtrace
      mpdallexit

    The output of mpdtrace should be the hostname of the machine you are
    running on.  The mpdallexit causes the mpd daemon to exit.
    If you have problems getting the mpd ring established, see the
    Installation Guide for instructions on how to diagnose problems
    with your system configuration that may be preventing it.

10. Now we will bring up a ring of mpd's on a set of machines.  Create a
    file named mpd.hosts containing a list of machine names, one per
    line (an example appears at the end of this step).  These hostnames
    will be used as targets for ssh or rsh, so include full domain names
    if necessary.  Check that you can reach these machines with ssh or
    rsh without entering a password.  You can test by doing

      ssh othermachine date

    or

      rsh othermachine date

    If you cannot get this to work without entering a password, you will
    need to configure ssh or rsh so that this can be done, or else use
    the workaround for mpdboot in the next step.
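
    A hypothetical mpd.hosts file might look like this (the hostnames
    are placeholders for your own machine names):

      node1.example.com
      node2.example.com
      node3.example.com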

11. Start the daemons on (some of) the hosts in the file mpd.hosts:

      mpdboot -n <number to start>  

    The number to start can be less than or equal to, but not greater
    than, 1 + the number of hosts in the file.  One mpd is always
    started on the machine where mpdboot is run, and is counted in the
    number to start, whether or not it occurs in the file.

    There is a workaround if you cannot get mpdboot to work because of
    difficulties with ssh or rsh setup.  You can start the daemons "by
    hand" as follows:

       mpd &                # starts the local daemon
       mpdtrace -l          # makes the local daemon print its host
                            # and port in the form <host>_<port>

    Then log into each of the other machines, put the install/bin
    directory in your path, and do:

       mpd -h <hostname> -p <port> &

    where the hostname and port belong to the original mpd that you
    started.  From each machine, after starting the mpd, you can do 

       mpdtrace

    to see which machines are in the ring so far.  More details on
    mpdboot and other options for starting the mpd's are in
    mpich2-1.0.6/src/pm/mpd/README.

 !! ***************************
    If you are still having problems getting the mpd ring established,
    you can use the mpdcheck utility as described in the Installation Guide 
    to diagnose problems with your system configuration.
 !! ***************************

12. Test the ring you have just created:

      mpdtrace

    The output should consist of the hosts where MPD daemons are now
    running.  You can see how long it takes a message to circle this
    ring with 

      mpdringtest

    That was quick.  You can see how long it takes a message to go
    around many times by giving mpdringtest an argument:

      mpdringtest 100
      mpdringtest 1000

13. Test that the ring can run a multiprocess job:

      mpiexec -n <number> hostname

    The number of processes need not match the number of hosts in the
    ring;  if there are more, they will wrap around.  You can see the
    effect of this by getting rank labels on the stdout:

      mpiexec -l -n 30 hostname

    You probably didn't have to give the full pathname of the hostname
    command because it is in your path.  If not, use the full pathname:

      mpiexec -l -n 30 /bin/hostname

14. Now we will run an MPI job, using the mpiexec command as specified
    in the MPI-2 standard.  There are some examples in the install
    directory, which you have already put in your path, as well as in
    the directory mpich2-1.0.6/examples.  One of them is the classic
    cpi example, which computes the value of pi by numerical
    integration in parallel.

      mpiexec -n 5 cpi

    The number of processes need not match the number of hosts.
    The cpi example will tell you which hosts it is running on.
    By default, the processes are launched one after the other on the hosts
    in the mpd ring, so it is not necessary to specify hosts when running a
    job with mpiexec.

    There are many options for mpiexec, by which multiple executables
    can be run, hosts can be specified (as long as they are in the mpd
    ring), separate command-line arguments and environment variables can
    be passed to different processes, and working directories and search
    paths for executables can be specified.  Do

      mpiexec --help

    for details. A typical example is:

      mpiexec -n 1 master : -n 19 slave

    or

      mpiexec -n 1 -host mymachine master : -n 19 slave

    to ensure that the process with rank 0 runs on your workstation.

    The arguments between ':'s in this syntax are called "argument
    sets", since they apply to a set of processes.  Some arguments,
    called "global", apply across all argument sets and must appear
    first.  For example, to get rank labels on standard output, use

      mpiexec -l -n 3 cpi

    See the User's Guide for much more detail on arguments to mpiexec.

    The mpirun command from the original MPICH is still available,
    although it does not support as many options as mpiexec.

If you have completed all of the above steps, you have successfully
installed MPICH2 and run an MPI example.  

More details on arguments to mpiexec are given in the User's Guide in
the doc subdirectory.  Also in the User's Guide you will find help on
debugging.  MPICH2 has some support for the TotalView debugger, as
well as some other approaches described there.


Alternatives
============

The above steps utilized the MPICH2 defaults, which included choosing
TCP for communication (the "sock" channel) and the MPD process manager.
Other alternatives are available.  You can find out about configuration
alternatives with

   ./configure --help

in the mpich2 directory.  The alternatives described below are
configured by adding arguments to the configure step.


Compiler Optimization Levels
============================

By default, from version 1.0.6 onwards the MPICH2 library is built
with the -O2 optimization level if it is available. The mpicc and
other scripts that are used to compile applications do not include any
optimization flag by default. If you want to build the MPICH2 library
with a specific optimization level, set the environment variable
CFLAGS to that level before running configure. If you do not want this
CFLAGS value to be included in the mpicc script, set the environment
variable MPI_CFLAGS to MPI_FLAGS_EMPTY. For example, to build a
"production" MPICH2 in a GNU environment, you may want to do:
     setenv CFLAGS -O3
     setenv MPI_CFLAGS MPI_FLAGS_EMPTY
before running configure. This will cause the MPICH2 library to be
built with -O3, but -O3 will not be included in the mpicc script.
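
For bash and sh, the equivalent is:
     CFLAGS=-O3 ; export CFLAGS
     MPI_CFLAGS=MPI_FLAGS_EMPTY ; export MPI_CFLAGS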


Alternative Process Managers
============================

mpd
---
MPD is the default process manager.  Its setup and use have been
described above.  The file mpich2-1.0.6/src/pm/mpd/README has more 
information about interactive commands for managing the ring of MPDs.

smpd
---- 
SMPD is a process management system for both Microsoft Windows and UNIX.
SMPD is capable of starting a job where some processes are running on
Windows and others are running on a variant of UNIX.  For more
information, please see mpich2-1.0.6/src/pm/smpd/README. 

gforker
-------
gforker is a process manager that creates processes on a single machine,
by having mpiexec directly fork and exec them.  This mechanism is
particularly appropriate for shared-memory multiprocessors (SMPs) where
you want to create all the processes on the same machine.  gforker is
also useful for debugging, where running all the processes on a single
machine is often convenient.
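
The process manager is chosen when MPICH2 is configured.  As an
illustration (check "./configure --help" for the exact syntax in your
version), gforker is typically selected with a configure option such as

   ./configure --with-pm=gforker ...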


Alternative Channels and Devices
================================

The communication mechanisms in MPICH2 are called "devices", paired with
specific "channels".  The most thoroughly tested device is the "ch3"
device.  The default configuration chooses the "sock" channel in the ch3
device (all communication goes over TCP sockets), which would be
specified explicitly by putting

  --with-device=ch3:sock

on the configure command line.  The ch3 device has two other channels
which are rigorously tested: "shm" (shared memory) for use on SMPs (all
communication goes through shared memory instead of over TCP sockets)
and "ssm" (sockets and shared memory) for use on clusters of SMPs
(communication between processes on the same machine goes through shared
memory; communication between processes on different machines goes over
sockets).  Configure these by putting

  --with-device=ch3:shm

or 

  --with-device=ch3:ssm

on the configure command line.

A new channel supports the dynamic loading of other channels.  To use this
channel, configure with

  --with-device=ch3:dllchan:sock,shm,ssm 

(This provides the sock, shm, and ssm channels as options, with sock being 
the default.)  In addition, you must specify the shared library type; under
Linux and when using gcc (or compilers that mimic gcc for shared-library 
construction) add

   --enable-sharedlibs=gcc

On Mac OS X, use

   --enable-sharedlibs=gcc-osx

This is an experimental channel in the 1.0.6 release.  To select a channel
other than the default channel, set the environment variable MPICH_CH3CHANNEL
to the channel name (i.e., sock, shm, or ssm).  If the process manager is
gforker, you can also use a command-line option to mpiexec; specify
-channel=name, as in

    mpiexec -n 4 -channel=shm a.out
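
For example, to select the shm channel through the environment variable,
use one of

    setenv MPICH_CH3CHANNEL shm                          (csh/tcsh)
    MPICH_CH3CHANNEL=shm ; export MPICH_CH3CHANNEL       (bash/sh)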

There are known problems with this channel, particularly during the make step.
You may find that some symbols are not found when loading the libraries.  
If you want to try this experimental channel, please let us know what does
and does not work.  The next release will have a more robust, 
ready-for-production, version of this channel.


The sshm (scalable shared memory) channel is not supported for
1.0.6. The code has been retained to provide an example of some MPI-2
RMA optimizations (the immediate method as opposed to the deferred
method used in other channels for implementing the synchronization in
MPI-2 one-sided communication). 

The InfiniBand (ib) channel has not been kept up to date for a long time
and hence we have not included it in this release.  If you need to use
MPI on InfiniBand, we recommend using MVAPICH2 or MVAPICH from Ohio
State Univ.   http://nowlab.cse.ohio-state.edu/projects/mpi-iba/

The Nemesis channel is a new low-latency channel that uses shared
memory for intra-node communication and various networks for
inter-node communication.  It currently supports TCP, GM, MX and
Qsnet/Elan networks.  Other networks will be supported in the future.
The Nemesis channel may be selected using the
--with-device=ch3:nemesis configure option.  The default network is
TCP.

  To use the GM network, use the --with-device=ch3:nemesis:gm
  configure option.  If the GM include files and libraries are not in
  the normal search paths, you can specify them with the
  --with-gm-include= and --with-gm-lib= options, or the --with-gm=
  option if lib/ and include/ are in the same directory.

  To use the MX network, use the --with-device=ch3:nemesis:mx
  configure option.  If the MX include files and libraries are not in
  the normal search paths, you can specify them with the
  --with-mx-include= and --with-mx-lib= options, or the --with-mx=
  option if lib/ and include/ are in the same directory.
  (Note: There are known performance issues with this module.)

  To use the Qsnet/Elan network, use the --with-device=ch3:elan 
  configure option. Specific paths can be specified with:
  --with-elan-include= (path to the elan include files),
  --with-elan-lib= (path to the elan libs) and (if needed)
  --with-qsnet-include= (path to the qsnet include files).
  (Note: This module is still in an *experimental* state.  It has not
  been thoroughly tested and performance issues remain.)

  Note that if the GM, MX or Elan libraries are shared libraries, they
  need to be in the shared library search path.  This can be done by
  adding the path to /etc/ld.so.conf, or by setting the
  LD_LIBRARY_PATH variable in your .bashrc (or .tcshrc) file.  It's
  also possible to set the shared library search path in the binary.
  If you're using gcc, you can do this by adding 
    LD_LIBRARY_PATH=/path/to/lib 
  and
    MPI_LDFLAGS="-Wl,-rpath -Wl,/path/to/lib"
  as arguments to configure.
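
  As a hypothetical illustration for the GM network (all paths here are
  placeholders), the configure line might look like

    ./configure --with-device=ch3:nemesis:gm --with-gm=/path/to/gm \
                LD_LIBRARY_PATH=/path/to/gm/lib \
                MPI_LDFLAGS="-Wl,-rpath -Wl,/path/to/gm/lib"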

  Using the --enable-fast configure option significantly improves
  intra-node performance.

  The --with-shared-memory= configure option allows you to choose how
  Nemesis allocates shared memory.  The options are "auto", "sysv",
  and "mmap".  Using "sysv" will allocate shared memory using the
  System V shmget(), shmat(), etc. functions.  Using "mmap" will
  allocate shared memory by creating a file (in /dev/shm if it exists,
  otherwise /tmp) and then mmap()ing the file.  The default is "auto".
  Note that System V shared memory has limits on the size of shared
  memory segments, so using this for Nemesis may limit the number of
  processes that can be started on a single node.
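
  For example, to force System V shared memory with the Nemesis channel,
  configure with

    ./configure --with-device=ch3:nemesis --with-shared-memory=sysv ...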

Nemesis is still a work in progress. Dynamic processes and
connect/accept are not yet implemented.  Performance in some areas is
still suboptimal.


The SCTP channel is a new channel using the Stream Control
Transmission Protocol.  SCTP is a new transport protocol available on
most operating systems using standard commodity hardware.  For some
background, this article answers "Why is SCTP needed given TCP and UDP
are widely available?" :
http://www.isoc.org/briefings/017/index.shtml

This channel supports regular MPI-1 operations as well as dynamic
processes and RMA from MPI-2; it currently does not offer support for
multiple threads.

Configure the sctp channel by putting

  --with-device=ch3:sctp

on the configure command line.

If the SCTP include files and libraries are not in the normal search
paths, you can specify them with the --with-sctp-include= and
--with-sctp-lib= options, or the --with-sctp=  option if lib/ and
include/ are in the same directory.

SCTP stack specific instructions:

  For FreeBSD 7 and onward, SCTP comes with CURRENT and is enabled with
  the "option SCTP" in the kernel configuration file.  The sctp_xxx()
  calls are contained within libc so to compile ch3:sctp, make a soft-link
  named libsctp.a to the target libc.a, then pass the path of the 
  libsctp.a soft-link to --with-sctp-lib.
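
  As a hypothetical illustration (the directory holding the soft-link is
  a placeholder of your choosing):

    ln -s /usr/lib/libc.a /some/dir/libsctp.a
    ./configure --with-device=ch3:sctp --with-sctp-lib=/some/dir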
  
  For FreeBSD 6.x, kernel patches and instructions can be downloaded at
  http://www.sctp.org/download.html .  These kernels place libsctp and
  headers in /usr, so nothing needs to be specified for --with-sctp
  since /usr is often in the default search path.

  For Mac OS X, the SCTP Network Kernel Extension (NKE) can be
  downloaded at http://sctp.fh-muenster.de/sctp-nke.html .  This places
  the lib and include in /usr, so nothing needs to be specified for
  --with-sctp since /usr is often in the default search path.

  For Linux, SCTP comes with the default kernel from 2.4.23 and later as
  a module.  This module can be loaded as root using "modprobe sctp".
  After this is loaded,  you can verify it is loaded using "lsmod".
  Once loaded, the SCTP socket lib and include files must be downloaded
  and installed from http://lksctp.sourceforge.net/ .  The prefix 
  location must then be passed into --with-sctp.  This bundle is called 
  lksctp-tools and is available for download off their website.

  For Solaris, SCTP comes with the default Solaris 10 kernel; the lib
  and include are in /usr, so nothing needs to be specified for
  --with-sctp since /usr is often in the default search path.  In order
  to compile under Solaris, CFLAGS must have
  -DMPICH_SCTP_CONCATENATES_IOVS set when running MPICH2's configure
  script.
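
  For example, before running configure on Solaris:

    setenv CFLAGS -DMPICH_SCTP_CONCATENATES_IOVS              (csh/tcsh)
    CFLAGS=-DMPICH_SCTP_CONCATENATES_IOVS ; export CFLAGS     (bash/sh)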

MPICH2 v1.0.5 was the initial release of the SCTP channel, so feedback is
definitely still welcome.  All MPI features except multithreading are
supported.  Some performance optimizations could surely exist for this 
channel and will be investigated for future releases.


VPATH Builds
============

MPICH2 supports building in a directory tree different from the one
where the MPICH2 source resides.  This often allows faster
building, as the sources can be placed in a shared filesystem and the
builds done in a local (and hence usually much faster) filesystem.  To
make this clear, the following example assumes that the sources are
placed in /home/me/mpich2-<VERSION>, the build is done in
/tmp/me/mpich2, and the installed version goes into
/usr/local/mpich2-<VERSION>:

  cd /home/me
  tar xzf mpich2-<VERSION>.tar.gz
  cd /tmp/me
  # Assume /tmp/me already exists
  mkdir mpich2
  cd mpich2
  /home/me/mpich2-<VERSION>/configure --prefix=/usr/local/mpich2-<VERSION>
  make
  make install



Shared Libraries
================

Shared libraries are currently only supported for gcc on Linux and
Mac OS X, and for cc on Solaris.  To have shared libraries created when
MPICH2 is built, specify the following when MPICH2 is configured:

    configure --enable-sharedlibs=gcc         (on Linux)
    configure --enable-sharedlibs=osx-gcc     (on Mac OS X)
    configure --enable-sharedlibs=solaris-cc  (on Solaris)



Optional Features
=================

MPICH2 has a number of optional features.  If you are exploring MPICH2
as part of a development project, the following configure options are
important:

Performance Options:

 --enable-fast - Turns off error checking and collection of internal
                 timing information

 --enable-timing=no - Turns off just the collection of internal timing
                 information

MPI Features:

  --enable-romio - Build the ROMIO implementation of MPI-IO.  This is
                 the default.

  --with-file-system - When used with --enable-romio, specifies
                 filesystems ROMIO should support.  See README.romio.

  --enable-threads - Build MPICH2 with support for multi-threaded
                 applications. Only the sock and nemesis channels support
                 MPI_THREAD_MULTIPLE. 

  --with-thread-package - When used with --enable-threads, this option
                 specifies the thread package to use.  This option
                 defaults to "posix".  At the moment, only POSIX
                 threads are supported on UNIX platforms.  We plan to
                 support Solaris threads in the future.

Language bindings:

  --enable-f77 - Build the Fortran 77 bindings.  This is the default.
                 It has been tested with the Fortran parts of the Intel
                 test suite.

  --enable-f90 - Build the Fortran 90 bindings.  This is not on by
                 default, since these have not yet been tested.

  --enable-cxx - Build the C++ bindings.  This has been tested with the
                 Notre Dame C++ test suite and some additional tests.

Cross compilation:

  --with-cross=filename - Provide values for the tests that require
                 running a program, such as the tests that configure
                 uses to determine the sizes of the basic types.  This
                 should be a file in Bourne shell format containing
                 variable assignments of the form

                     CROSS_SIZEOF_INT=2

                 for all of the CROSS_xxx variables.  A list will be
                 provided in later releases; for now, look at the
                 configure.in files.  This has not been completely
                 tested.

Error checking and reporting:

  --enable-error-checking=level - Control the amount of error checking.
                 Currently, only "no" and "all" are supported; "all" is
                 the default.

  --enable-error-messages=level - Control the amount of detail in error
                 messages.  By default, MPICH2 provides
                 instance-specific error messages; but, with this
                 option, MPICH2 can be configured to provide less
                 detailed messages.  This may be desirable on small
                 systems, such as clusters built from game consoles or
                 high-density massively parallel systems.  This is still
                 under active development.

Compilation options for development:

  --enable-g=value - Controls the amount of debugging information
                 collected by the code.  The most useful choice here is
                 dbg, which compiles with -g.

  --enable-coverage - An experimental option that enables GNU coverage
                 analysis.

  --with-logging=name - Select a logging library for recording the
                 timings of the internal routines.  We have used this to
                 understand the performance of the internals of MPICH2.
                 More information on the logging options, capabilities
                 and usage can be found in doc/logging/logging.pdf.

  --enable-timer-type=name -  Select the timer to use for MPI_Wtime
                 and internal timestamps.  name may be one of:
                     gethrtime        - Solaris timer (Solaris systems
                                        only) 
                     clock_gettime    - Posix timer (where available)
                     gettimeofday     - Most Unix systems
                     linux86_cycle    - Linux x86; returns cycle
                                        counts, not time in seconds*
                     linuxalpha_cycle - Like linux86_cycle, but for
                                        Linux Alpha* 
                     gcc_ia64_cycle   - IPF ar.itc timer*
                     device           - The timer is provided by the device
                 *Note that the cycle timers are intended to be used by
                  MPICH2 developers for internal low-level timing.
                  Normal users should not use these as they are not
                  guaranteed to be accurate in certain situations.
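
As an illustration only, several of these options can be combined on one
configure line; for example, a debugging-oriented build with thread
support might be configured as

   ./configure --prefix=/home/you/mpich2-install --enable-g=dbg \
               --enable-error-checking=all --enable-threads \
               --with-thread-package=posix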



Status of MPI-2 Features in MPICH2
==================================

MPICH2 includes all of MPI-1 and the following parts of MPI-2:

MPI-IO, except for the external data representations (i.e., MPICH2
includes all of ROMIO).

Active-target one-sided communication is implemented. Passive target
one-sided communication (with MPI_Win_lock and MPI_Win_unlock) is
implemented but relies on MPI functions at the target for progress to
occur (the function could well be MPI_Win_free).  The one exception is
in the sshm channel when window memory is allocated with
MPI_Alloc_mem. In this case, communication happens without any action
from the target.

The dynamic process management routines (MPI_Comm_spawn,
MPI_Comm_connect, MPI_Comm_accept, and their relations) are supported,
but only for the sock and ssm channels.

The "singleton init" feature, whereby a process not started by mpiexec
can become an MPI process, is supported under the mpd process manager.

Some routines, such as the intercommunicator extensions to the
collective routines, have not been extensively tested.

Only the sock and nemesis channels support MPI_THREAD_MULTIPLE.