unicfdlab/hybridCentralSolvers

Error when running in parallel on OpenFOAM-v2212

16-1895 opened this issue · 9 comments

Hello,
I ran a high-speed combustion case in parallel successfully with reactingPimpleCentralFoam on OpenFOAM-v1912. Now I need to run it on v2212 for further work, but I get an error: *** Error in `reactingPimpleCentralFoam': malloc(): memory corruption: 0x0000000007cece30 ***. When I run the same case serially, there is no error. I also tested reactingFoam (v2212) and pimpleCentralFoam (v2212) in parallel, and neither shows the error. Do you know how to solve this?
Here is my case with the error log.
case.zip

I also tested the Tutorials/shockTubeTwoGases case (v2212). The serial run completed without problems, but the parallel run crashed midway (I only changed numberOfSubdomains).
logerror.txt
I am confused now. Can anyone help me? Thanks in advance!

Hi, thank you. I'll check the source code. That seems like strange behaviour; we have checked parallel runs many times.

The shockTubeTwoGases crash looks very similar to a numerical instability.

Can you try your case with OpenFOAM-v2112? That version seems to work OK, and the changes compared to v2212 are not significant. I think the problem is with OpenFOAM itself.

The problem comes from this part of the code (YEqn.H, lines 224-240):

forAll(maxDeltaY.boundaryField(), iPatch)
{
    if (maxDeltaY.boundaryField()[iPatch].coupled())
    {
        scalarField intF = maxDeltaY.boundaryField()[iPatch].primitiveField();
        scalarField intH = hLambdaCoeffs.boundaryField()[iPatch];
        const scalarField& intL = lambdaCoeffs.boundaryField()[iPatch].primitiveField();

        forAll(intF, iFace)
        {
            if (intF[iFace] > 0.05)
            {
                intH[iFace] = intL[iFace];
            }
        }

        hLambdaCoeffs.boundaryFieldRef()[iPatch].operator = (intH);
    }
}

Try commenting it out and let me know if this helps.
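
If simply deleting the block is undesirable, one possible alternative is to modify hLambdaCoeffs in place through boundaryFieldRef() instead of copying the coupled-patch values into a temporary scalarField and writing them back with operator=. This is only a sketch against the field names used in YEqn.H above (maxDeltaY, hLambdaCoeffs, lambdaCoeffs), not the amendment that was eventually merged:

// Sketch only: same limiter logic as above, but the coupled-patch values of
// hLambdaCoeffs are overwritten directly rather than copied and assigned back.
forAll(maxDeltaY.boundaryField(), iPatch)
{
    if (maxDeltaY.boundaryField()[iPatch].coupled())
    {
        scalarField intF = maxDeltaY.boundaryField()[iPatch].primitiveField();
        const scalarField& intL = lambdaCoeffs.boundaryField()[iPatch].primitiveField();

        // non-const reference to the patch values, so the selected faces
        // can be modified in place
        auto& intH = hLambdaCoeffs.boundaryFieldRef()[iPatch];

        forAll(intF, iFace)
        {
            if (intF[iFace] > 0.05)
            {
                intH[iFace] = intL[iFace];
            }
        }
    }
}

The only change is that the detached copy and the operator= write-back on coupled patches are avoided; whether that is enough to prevent the corruption on v2212 would still need to be tested.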

Hi, it helps. Both my case and shockTubeTwoGases now run stably in parallel. Is this the final solution?

It looks like something has changed in the handling of inter-processor boundaries. I think you can proceed with the current solution. I'll check over the weekend what exactly has changed and then write here.

OK, thanks for your reply.

Hi, I made an amendment; it is available in my repository. I think @unicfdlab will merge it soon. Thank you for reporting the bug!