MPI & OpenMP partitioning
raulleoncz opened this issue · 1 comment
Hello,
I've been using specfem2d with MPI and until now it has worked correctly. Now that I am trying to move to other software specialized in inversion, such as adjtomo/seisflows, which uses specfem2d, I have found some issues related to how the domain is partitioned.
- When we use MPI, I understand that the domain is divided among the processors, so each processor owns a subset of the elements. This division is not necessarily even, so each processor can end up with a different number of elements, for example:
Based on the last image, when I use seisflows I get the following error:
This image says '20 processors' because it corresponds to another example, but the same error occurs for any simulation with NPROC > 1. Basically, it cannot run the problem unless each processor has the same number of elements.
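To illustrate why the element counts differ between processors: even the simplest block distribution gives unequal counts whenever the total number of elements is not divisible by NPROC (the graph partitioner specfem2d uses can be even more uneven, since it optimizes communication rather than exact balance). A minimal sketch with hypothetical numbers (2501 elements, 4 ranks):

```shell
#!/usr/bin/env bash
# Sketch: near-even block distribution of elements over MPI ranks.
# 2501 elements and 4 ranks are made-up numbers for illustration.
nelem=2501
nproc=4
for ((rank = 0; rank < nproc; rank++)); do
  # the first (nelem % nproc) ranks get one extra element
  extra=$(( rank < nelem % nproc ? 1 : 0 ))
  count=$(( nelem / nproc + extra ))
  echo "rank $rank: $count elements"
done
```

Here rank 0 gets 626 elements while ranks 1-3 get 625, so any tool that assumes identical per-rank element counts will only work when NPROC happens to divide the mesh size exactly.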
I reported this issue in the seisflows repository, but the error has been difficult to solve. Do you have any idea how we should deal with it? Could you give me your opinion on it?
- Another option that comes to mind is to use OpenMP instead of MPI, but I do not know whether OpenMP has been used in specfem2d or whether it is a good idea. I tried to compile specfem2d with OpenMP using the command ./configure --enable-openmp, but the compilation failed. The error I got is the following:
Is there any way to solve this?
I am sorry for the inconvenience but I hope you can help me. Thank you so much.
thanks for pointing this out!
regarding the OpenMP issue, please try again with the latest devel branch version of SPECFEM2D. it should have been fixed by PR #1215. also note that OpenMP is only supported for viscoelastic domains.
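For completeness, the steps suggested in the reply above might look like this, assuming an existing git clone of the SPECFEM2D repository (compiler choice and job count are placeholders, not requirements):

```shell
# Sketch: rebuild from the latest devel branch, which includes the
# OpenMP fix referenced above (PR #1215).
git checkout devel
git pull

# start from a clean build tree, then reconfigure with OpenMP enabled
make clean
./configure --enable-openmp
make -j4
```

Note the limitation stated above: OpenMP support only applies to viscoelastic domains.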