Poor performance
yashkhurana24 opened this issue · 5 comments
Hi!
I am not too surprised by the result. N2V output will be smoothed, and there is potentially not enough information in the degraded image.
You could try N2V2 to get rid of the small checkerboard artefact (it seems to be present in the denoised image). Otherwise, if you want to obtain a better image with N2V, the only thing you can do is use more data, so the network learns a better structural prior and noise model. That means having very similar images, degraded in the same way, and adding them to the training set.
Otherwise, you can try other algorithms such as PPN2V or DivNoising/HDN. But they are more difficult to use (we are working on it).
@jdeschamps Hello, thanks for the quick response! I had some additional doubts, if you could answer those.
- What are the reasons for the over-smoothing in N2V? Can it be because there are too many low-intensity pixels in the image? (Attaching the full original image for reference)
- Do you think N2V would perform better on a degraded image with less noise?
- So N2V denoised images will always be "smoothed", because in the presence of noise (all the more when the signal-to-noise ratio is very low, i.e. limited information) there are multiple possible denoised images that are equivalent. N2V converges to some sort of average of the possible denoised images, hence the smoothing. You can check out the DivNoising paper to see what a different network, with the capacity to sample the possible denoised images, can give, and what the average denoised image looks like.
- If you have little noise, then N2V adds little value. It is a great algorithm when the image is very degraded, because there is no way to classically get a good estimation of the structures. But you have to live with the fact that where there is too little information, you will get a blur.
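The "average of possible denoised images" argument above can be illustrated numerically. The toy example below (my own sketch, not from the N2V paper) builds a few equally plausible "clean" explanations of a noisy patch, here shifted diagonals standing in for a fine structure at slightly different positions, and checks that the MSE-minimizing estimate is their pixelwise mean, which is blurrier than any single sharp candidate:

```python
import numpy as np

# Hypothetical illustration: three equally plausible "clean" explanations
# of the same noisy patch (a thin line at slightly different positions).
candidates = np.stack([np.roll(np.eye(8), k, axis=1) for k in range(-1, 2)])

# A network trained with MSE, unable to commit to one candidate, converges
# to the value minimising the expected squared error: the pixelwise mean.
best_constant = candidates.mean(axis=0)

def avg_mse(estimate):
    # Average squared error of one estimate against all candidates.
    return np.mean((candidates - estimate) ** 2)

# The mean beats every individual sharp candidate on average MSE...
assert avg_mse(best_constant) < min(avg_mse(c) for c in candidates)

# ...but it is blurrier: peak value 1/3 instead of 1.0.
print(candidates[0].max(), best_constant.max())
```

This is why sampling-based models like DivNoising can look sharper: they return individual plausible images instead of their average.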
May I ask what you are trying to do?
Do you think the model would perform better if the image had a higher brightness (more details visible)?
What I'm trying to do: I have some elemental maps from X-ray fluorescence imaging (Poisson, or shot, noise). I'm trying to add noise to them with the formula that I mentioned above to simulate them being captured at a lower dwell time, and then denoise them with N2V to come back to the original. Please let me know if you know any other models that might work for this use case, or if you have any other advice.
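For shot-noise-limited data, a common way to simulate a shorter dwell time is to scale the expected counts by the dwell-time ratio and resample from a Poisson distribution. The sketch below assumes that model (it may differ from the exact formula referenced above); the toy map, its intensities, and the 10x dwell reduction are all made-up illustration values:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_low_dwell(counts, dwell_ratio):
    """Resample a shot-noise-limited map as if acquired at a shorter dwell time.

    counts: expected counts at the original dwell time (non-negative).
    dwell_ratio: new_dwell / old_dwell, in (0, 1].
    """
    # Expected counts scale linearly with dwell time; shot noise is Poisson.
    return rng.poisson(np.asarray(counts, dtype=float) * dwell_ratio)

# Toy "elemental map": a constant background plus a sinusoidal feature.
gt = np.fromfunction(lambda y, x: 50 + 20 * np.sin(x / 5), (64, 64))
noisy = simulate_low_dwell(gt, dwell_ratio=0.1)

# The SNR of a Poisson signal is sqrt(mean), so a 10x shorter dwell time
# costs roughly a factor sqrt(10) in SNR.
print(noisy.mean(), gt.mean() * 0.1)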
Hi @yashkhurana24,
The advantage of Noise2Void is that you don't have to generate pairs of low- and high-quality images to train a model. However, if you go through the process of obtaining (for example simulating) low-quality images, you could also just train a classical CARE model. But the difficult part is to get the noise simulation right, i.e. to make your simulated noise as close as possible to the real noise.
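Generating such paired training data is mechanically simple once the simulator exists. A minimal sketch, assuming Poisson shot noise at 10% dwell time as the degradation (random data stands in for real high-quality acquisitions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a stack of real high-quality acquisitions.
clean_stack = rng.uniform(20, 200, size=(16, 128, 128))

# Degrade each clean image the way the real microscope would (assumption
# here: Poisson shot noise at 10% of the original dwell time), then rescale
# back to the original intensity range.
dwell_ratio = 0.1
noisy_stack = rng.poisson(clean_stack * dwell_ratio) / dwell_ratio

# Supervised (CARE-style) training consumes aligned (input, target) pairs;
# the critical requirement is that this simulated noise matches the real one.
pairs = list(zip(noisy_stack, clean_stack))
print(len(pairs), pairs[0][0].shape)
```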
If you don't want to simulate noisy acquisitions, you can simply acquire noisy images and train Noise2Void with them. But as @jdeschamps pointed out, you want to have a large dataset to train on; only then can N2V learn a good prior. And again, you will always get a slightly blurrier version.