Artefacts - patch size?
Closed this issue · 5 comments
Hi,
I am trying to use SUPPORT, but the first results I am getting look really off (image attached).
I have trained on my data with a different patch size, since my patches are smaller. Here is the command/settings I used:
python -m src.train --exp_name parallel1 --noisy_data folder --is_folder --results_dir D:\deepSupport\trainedModel --patch_size 61 38 38 --bs_size 3 3
In the python test code I have changed the patch size to
patch_size = [61, 38, 38]
patch_interval = [1, 19, 19]
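For reference, here is a minimal sketch of how I understand the tiling to work, assuming patch_interval is the stride between patch origins (so spatial patches of 38 pixels with a stride of 19 overlap by 50%, matching the default ratio). The helper name is hypothetical, not part of SUPPORT:

```python
# Sketch: how patch_size / patch_interval tile one axis of a frame.
# Assumption: patch_interval is the stride between patch start positions.
def patch_starts(length, patch, stride):
    """Start indices so patches of size `patch` cover [0, length)."""
    starts = list(range(0, max(length - patch, 0) + 1, stride))
    # Add a final patch flush with the end if the grid falls short.
    if starts[-1] + patch < length:
        starts.append(length - patch)
    return starts

# Example: a 512-pixel axis with patch 38 and stride 19 (50% overlap).
starts = patch_starts(512, 38, 19)
print(len(starts), starts[:3], starts[-1])  # → 26 [0, 19, 38] 474
```

With this stride every interior pixel is covered by two patches along each spatial axis, which is presumably why the test code pairs patch_size 38 with interval 19.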
Is the change in patch size the reason for the artefacts? Should I have changed anything else in the test code, or are there certain requirements for the patch size? Or is the blind spot size of 3 too big for smaller patches?
Thanks!
Hi,
For some data (especially data that has been motion-corrected), noise can be correlated with neighboring pixels.
In that case, a larger blind spot could help.
Trying --bp mode could also help.
Hi, thanks for the answer!
But can correlated noise explain the artefacts/white lines running across the whole image?
Is --bp mode meant for training or testing?
Yes, when the data has correlated noise, the network can overfit to it.
While I am not sure what the raw image looks like, it likely contains line artifacts as well; after denoising, such artifacts become visible because the other noise has been reduced.
--bp mode is required for both training and testing. You need to train it again.
The solution to the artefact problem seems to be to only analyze videos without black padding (from motion correction)
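For anyone hitting the same issue, a minimal sketch of cropping away the fully-black border that motion correction adds, before running the denoiser. This assumes the padding is exactly zero in every frame; the function name is hypothetical, not part of SUPPORT:

```python
import numpy as np

def crop_black_padding(video):
    """Crop rows/columns that are zero in every frame of a (T, H, W) array."""
    ever_nonzero = video.max(axis=0) > 0          # pixels lit in any frame
    rows = np.flatnonzero(ever_nonzero.any(axis=1))
    cols = np.flatnonzero(ever_nonzero.any(axis=0))
    return video[:, rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

# Example: a 3-frame video with a 2-pixel black border on every side.
vid = np.zeros((3, 10, 12))
vid[:, 2:8, 2:10] = 1.0
print(crop_black_padding(vid).shape)  # → (3, 6, 8)
```

Note this only handles rectangular padding; if the motion-corrected border varies per frame, cropping to the intersection of valid regions across frames would be safer.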
It seems like the training procedure malfunctioned due to the padding.
Anyway, I'm glad to hear that the problem is solved.