TuragaLab/DECODE

Gridding artifact in the localization predictions

tdaugird opened this issue · 3 comments

Hey all, thanks for the new tool. We are excited about pulling this into our regular use for SMLM work.

I've run into a little bit of a problem with the localizations. We are seeing a gridding artifact in our final rendered image:
[image: rendered reconstruction showing the gridding artifact]

It looks like there is some bias in the localizations. Along the x axis, the first digit after the decimal point appears to be biased towards one. Here is a histogram of a subset of the x localizations:
[image: histogram of a subset of the x localizations]

Here is a histogram of the first digit after the decimal point for the x localizations. I can't think of a good reason why this would be anything other than a uniform distribution:
[image: histogram of the first decimal digit of the x localizations]

I am seeing the same kind of thing along the y axis as well.
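For reference, here's roughly how I computed that digit histogram (a quick sketch; `x` is assumed to be a NumPy array of the x coordinates pulled from the emitter set):

```python
import numpy as np
import matplotlib.pyplot as plt

# x is assumed to be a 1D NumPy array of x coordinates (e.g. in nm)
frac = np.modf(x)[0]          # fractional part of each localization
digit = np.floor(frac * 10)   # first digit after the decimal point

# if the localizations were unbiased, this should be roughly uniform
plt.hist(digit, bins=np.arange(11) - 0.5, rwidth=0.9)
plt.xlabel("first decimal digit of x")
plt.ylabel("count")
plt.show()
```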

The model used to make these predictions was trained on Colab, using a Frankenstein version of your example fitting notebook. However, a lab mate ran into this same issue a few months back after training a model locally.

I'm not sure what additional files or info would be helpful for sorting this out. I'd be happy to help out with anything that I can on our end.

Hey Tim,
In general, those artifacts start to appear when the uncertainty of the localizations gets large, and there are multiple ways to deal with them (see https://www.biorxiv.org/content/10.1101/2020.10.26.355164v1, Extended Data Fig. 8). The best way is to convolve the localizations with the correct uncertainties when rendering; this is implemented in the master branch (decode.renderer.RendererIndividual2D, see Fitting.ipynb for an example).
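Roughly, the usage looks like this (a sketch only; the argument names here are assumptions, so please take the exact call from Fitting.ipynb):

```python
import decode

# Sketch only -- px_size and any other arguments are assumptions; see
# Fitting.ipynb in the master branch for the exact signature.
renderer = decode.renderer.RendererIndividual2D(px_size=10.)

# em: the EmitterSet returned by fitting. Each localization is blurred
# with its own predicted uncertainty rather than a fixed sigma.
renderer.render(em)
```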

Of course, while this might remove the artifacts, it won't give you a sharper reconstruction. I think I have played around with similar-looking data from your lab before, and it seemed to me that, for the given signal-to-noise ratio and density, it was not possible to achieve much better results.
However, if you think you should be able to get better results, you can send me your Colab notebook and I can check the settings.

Hey there, thanks for the insight regarding rendering. It seems that even after convolving the localizations with the uncertainties, there is still a considerable gridding artifact. This is what I would have expected, given that the artifact appears to arise from the bias in the localizations that we found.

[image: reconstruction rendered with per-localization uncertainties, gridding still visible]

It seems interesting to me that large uncertainties would cause the model to place the localizations with this kind of bias. My background is not at all in computer science, and I'll admit a rather limited understanding of neural networks.

Thanks for the offer to take a look at things. I'll shoot you a line. I just wanted to put this issue out here in case anyone else encounters it. I'll make sure to follow up here with anything else that I uncover.

Just adding to that:
Those artefacts can also appear when the simulation does not match the experimental data well. This could be due to a bad PSF calibration (or even a flipped/mirrored PSF, etc.) or a wrong background range. I haven't seen the data, but I would definitely look into that and make sure everything is set up correctly.
If the data is super low SNR or super dense, these artifacts can of course appear, but they look unexpectedly striking in this example.
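One quick sanity check along those lines is to compare the raw intensity statistics of the simulated training frames against the experimental frames. A sketch (`frames_sim` and `frames_exp` are assumed to be arrays of camera frames in the same units):

```python
import matplotlib.pyplot as plt

# If simulation and experiment match, the pixel-intensity histograms
# should largely overlap; a shifted or scaled histogram points at a
# wrong background range or wrong photon/camera settings.
plt.hist(frames_exp.ravel(), bins=200, density=True, alpha=0.5, label="experimental")
plt.hist(frames_sim.ravel(), bins=200, density=True, alpha=0.5, label="simulated")
plt.yscale("log")
plt.xlabel("camera counts (ADU)")
plt.ylabel("density")
plt.legend()
plt.show()
```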