ufs-community/ccpp-physics

MPI calls are not actually used in UFS because compile doesn't set -DMPI


Description

A few physics routines use mpi_bcast to have one MPI task read an input file and broadcast the data to the other tasks. But the UFS compile does not set -DMPI to enable these calls, so every task ends up reading the input file itself, because the MPI calls are guarded by

#ifdef MPI
<mpi calls>
#endif
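
For concreteness, a minimal sketch of the read-and-broadcast pattern is below. The routine name, argument names, and file name are hypothetical, not taken from any particular scheme; the point is only how the guards behave with and without -DMPI.

subroutine example_read_init(mpicomm, mpirank, mpiroot, errmsg, errflg)
#ifdef MPI
   use mpi
#endif
   implicit none
   integer,          intent(in)  :: mpicomm, mpirank, mpiroot
   character(len=*), intent(out) :: errmsg
   integer,          intent(out) :: errflg
   real    :: coeffs(10)   ! hypothetical lookup-table data read from file
   integer :: ierr

   errmsg = ''
   errflg = 0

#ifdef MPI
   ! With -DMPI, only the root task reads the file ...
   if (mpirank == mpiroot) then
#endif
      open (unit=10, file='coeffs.dat', status='old')
      read (10,*) coeffs
      close (10)
#ifdef MPI
   end if
   ! ... and the data are broadcast to all other tasks on the communicator.
   call mpi_bcast(coeffs, size(coeffs), MPI_REAL, mpiroot, mpicomm, ierr)
#endif
   ! Without -DMPI, the guards compile away and every task executes the read.
end subroutine example_read_init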

The UFS make process does set -DFV3, which could be used as a workaround in place of "MPI". Maybe this issue should be raised at the UFS level instead?

A broader question here, however, is what level of MPI use is acceptable in CCPP. For example, if an init routine wants to check whether a particular tracer is zero everywhere in the domain, is it OK to use mpi_allreduce, etc.? And is there a way to guarantee that MPI support is actually being compiled in?

@MicroTed The preprocessor flag -DMPI is set by the UFS in compile.sh.

Additional guidance on using MPI within physics schemes can be found here: https://ccpp-techdoc.readthedocs.io/en/latest/CompliantPhysicsParams.html#parallel-programming-rules.
As for your specific question about mpi_allreduce(): yes, mpi_allreduce() is a collective (global) communication call, and it is allowed in the CCPP init, timestep_init, finalize, and timestep_finalize phases.
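
For illustration, a hypothetical init-phase check along the lines of the tracer question might look like the sketch below. The routine and variable names are made up, and it assumes the host passes in the MPI communicator along with the usual errmsg/errflg arguments.

subroutine check_tracer_init(qtracer, mpicomm, errmsg, errflg)
#ifdef MPI
   use mpi
#endif
   implicit none
   real,             intent(in)  :: qtracer(:,:)   ! hypothetical tracer field on this task
   integer,          intent(in)  :: mpicomm
   character(len=*), intent(out) :: errmsg
   integer,          intent(out) :: errflg
   real    :: local_max, global_max
   integer :: ierr

   errmsg = ''
   errflg = 0

   local_max  = maxval(abs(qtracer))
   global_max = local_max
#ifdef MPI
   ! Collective reduction over all tasks; per the rules linked above this is
   ! only allowed in the init/timestep_init/timestep_finalize/finalize phases.
   call mpi_allreduce(local_max, global_max, 1, MPI_REAL, MPI_MAX, mpicomm, ierr)
#endif
   if (global_max <= 0.0) then
      write (*,*) 'tracer is zero everywhere in the domain'
   end if
end subroutine check_tracer_init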

@dustinswales Thanks for those pointers! I wonder why -DMPI is set for the regression tests (RT) but not by the build.sh script? (At least I don't see that it is set by default for 'normal' compiling.) I'll try testing something with that.