gdtk-uq/gdtk

Segmentation fault when setting config.shock_detector_smoothing

Closed this issue · 8 comments

As the title says, whenever I try to set config.shock_detector_smoothing=1 in my script file I encounter a segmentation fault. Script and mesh are attached. Not sure if this is an Eilmer4 problem or user error!
waverider-compression.zip

Problem is a shock fitting mesh over a Power Law nose cone. Mesh generated in Pointwise.

The issue is with the InFlowBC_ShockFitting boundary condition. The shock-fitting boundary condition does not use ghost cells, but the detector smoothing process does. I have an idea for a fix, but in the meantime I suspect you should be fine running this case without the smoothing.

Hi Jeremy,

It looks like you've hit a bug that occurs with a particular combination of inputs, in this case shock_detector_smoothing=true with a shock fitting boundary condition present. The issue is that the shock fitting boundary condition has no ghost cells, while the smoothing routines blindly assume that every face in the grid has both a left_cell and a right_cell, interrogating both to look for the presence of a shock.
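To illustrate the failure mode, here is a rough sketch in Python (not the actual Eilmer D source; the face/cell field names and the max-style blend are assumptions for the example). The guarded version simply falls back to whichever neighbour exists instead of dereferencing a missing ghost cell:

```python
# Illustrative sketch only: a smoothing pass that dereferences both
# neighbours of every face crashes at a shock-fitted boundary, where
# one side of the face has no ghost cell.
def smooth_shock_detector(faces):
    """Blend each cell's shock-detector value S onto the faces,
    guarding against faces with a missing neighbour cell."""
    for face in faces:
        left = face.get("left_cell")    # may be None at a shock-fitted boundary
        right = face.get("right_cell")
        if left is None and right is None:
            continue  # orphan face: nothing to interrogate
        if left is None or right is None:
            # Only one neighbour present: use its value directly.
            cell = left if left is not None else right
            face["S"] = cell["S"]
        else:
            # Interior face: take the stronger of the two indicators.
            face["S"] = max(left["S"], right["S"])
    return faces
```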

The reason this issue hasn't been noticed is that it is quite unusual to use the smoothing on a shock-fitted flow. The general goal of shock fitting is that the outer boundary of the domain IS the shock, whereas the shock detector is normally used for shocks inside the fluid domain that need to be captured by the numerical scheme.

Are you sure you need the smoother for this problem?

In any case, I'll push a fix to prevent a segfault when this combination of inputs is entered. That certainly shouldn't be happening.

Thanks for that. I don't think the shock smoothing is necessary either, so I will disable it. But I keep running into this error: Too many bad cells following implicit gasdynamic update with moving grid. That was what I was trying to resolve.

For example this is the nose region after the grid moves...all is okay.
image

Some timesteps later I get false shock readings in the nose region, which cause the mesh to move erroneously, and the simulation subsequently falls over.
image

Most of my config options are as follows. Admittedly my CFL values might be too high after the grid moves.

```lua
-- ALL CONFIG OPTIONS
body_flow_time = l/u
print("body_flow_time=", body_flow_time)
config.flux_calculator = "adaptive_hanel_ausmdv"
config.dimensions = 2
config.axisymmetric = true

config.max_time = 4*body_flow_time
config.max_step = 400000
config.dt_init = 1.0e-8

config.dt_plot = 0.5*body_flow_time
config.shock_fitting_delay = 1.2*body_flow_time
config.grid_motion = "shock_fitting"

config.max_invalid_cells = 20
config.adjust_invalid_cell_data = true
config.report_invalid_cells = false

config.gasdynamic_update_scheme = "backward_euler"
config.cfl_schedule = {{0.0, 10.0}, {1.0*body_flow_time, 10.0}, {1.2*body_flow_time, 0.5}, {3.0*body_flow_time, 10.0}}
--config.cfl_value = 8.0

config.interpolation_order = 2
config.interpolation_delay = 1.1*body_flow_time
config.compression_tolerance = -0.4
config.shear_tolerance = 0.3
```
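For what it's worth, my understanding is that cfl_schedule interpolates linearly between the listed {time, cfl} pairs and holds the end values outside the table. A quick Python sketch of that assumed behaviour, with body_flow_time taken as 1.0 purely for illustration:

```python
def cfl_at(t, schedule):
    """Linearly interpolate a (time, cfl) schedule at time t,
    clamping to the end values outside the tabulated range.
    (Assumed behaviour of config.cfl_schedule.)"""
    if t <= schedule[0][0]:
        return schedule[0][1]
    if t >= schedule[-1][0]:
        return schedule[-1][1]
    for (t0, c0), (t1, c1) in zip(schedule, schedule[1:]):
        if t0 <= t <= t1:
            return c0 + (c1 - c0) * (t - t0) / (t1 - t0)

# The schedule above, with body_flow_time = 1.0 for illustration:
# CFL drops sharply as shock fitting starts at 1.2*body_flow_time,
# then ramps back up to 10 by 3*body_flow_time.
schedule = [(0.0, 10.0), (1.0, 10.0), (1.2, 0.5), (3.0, 10.0)]
```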

Does it go back when the CFL schedule steps back up to 10? Personally I prefer the "moving_grid_2_stage" gasdynamic update scheme, but I don't have much experience with implicit grid motion. The grid motion obviously makes the numerics more sensitive, so my go-to when things aren't working is to drop the CFL a bit. Perhaps unintuitively, the shock detector is completely unrelated to the shock fitting algorithm, and the two may actually work against each other.

The shock detector adds dissipation to internal shocks, while the shock fitting algorithm uses the Rankine-Hugoniot relations at the inflow boundary to calculate a shock speed.
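As a rough sketch of that shock-speed calculation: mass conservation across a moving normal shock relates the pre- and post-shock states to the shock speed. This is illustrative Python, not the Eilmer implementation, and ignores the momentum and energy relations the full algorithm would also satisfy:

```python
def shock_speed(rho1, u1, rho2, u2):
    """Speed v_s of a moving normal shock from mass conservation
    across the shock front:
        rho1*(u1 - v_s) = rho2*(u2 - v_s)
    =>  v_s = (rho1*u1 - rho2*u2) / (rho1 - rho2)
    States 1 and 2 are the pre- and post-shock sides."""
    return (rho1 * u1 - rho2 * u2) / (rho1 - rho2)

# When the mass fluxes on the two sides balance, the shock is
# stationary and the fitted inflow boundary stops moving.
```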

Yeah, after some tweaking it looks like the errors after shock fitting may be tied to slowly ramping the CFL number back up. I'm only using the implicit scheme because I'm being lazy and want the simulation done quickly. It is only meant to be a simple inviscid solution with a no-slip wall.

@Whyborn what CFL range do you recommend with moving_grid_2_stage? If this persists I will try explicit methods. Otherwise I think this issue can be closed, since my errors are unrelated to the segmentation fault.

I usually use a CFL of 0.5 for the explicit scheme. I'm not sure how long the simulation is taking on your machine, but what you could do is start with the implicit scheme at high CFL until the shock fitting begins, reduce it until the shock is nicely aligned, and make sure you write a flow solution at that time. Then you can restart from that snapshot using --tindx-start=x and adjust the CFL directly in the config file before each run, to see how high you can get it without it crashing.

So for the configuration you posted above, just restart the simulation with --tindx-start=6 and adjust the CFL values in the .control file.

Thanks guys! I'll close this issue out.

This issue resulted in commit 6de33cd, which has corrected the problem.