This repository hosts our implementation of Probabilistic Noise2Void (PN2V). The original paper is available at https://www.frontiersin.org/articles/10.3389/fcomp.2020.00005/full, and a preprint at https://arxiv.org/abs/1906.00651. PN2V is a self-supervised, CNN-based denoising method that achieves results close to state-of-the-art methods but requires only individual noisy images for training.
Requirements: Python 3 and PyTorch.
PN2V estimates a posterior distribution for every pixel. We can sample from these posteriors to get a feeling for the uncertainty in different regions of the image. Here, we independently draw multiple samples for each pixel and display the result as an animation. In the resulting GIFs, regions with high uncertainty show stronger fluctuations.
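The following is a minimal sketch of this visualization, not the repository's own code: it assumes an array `samples` of shape `(S, H, W)` holding `S` posterior samples per pixel (as produced by a PN2V prediction step), repeatedly draws one sample per pixel, and writes the frames as an animated GIF.

```python
import numpy as np
import imageio

def posterior_animation(samples, path="posterior.gif", n_frames=20, seed=0):
    """Draw one posterior sample per pixel for each frame and save a GIF."""
    rng = np.random.default_rng(seed)
    s, h, w = samples.shape
    # Use a global min/max so all frames share the same intensity range.
    lo, hi = samples.min(), samples.max()
    frames = []
    for _ in range(n_frames):
        # Independently pick one of the S samples for every pixel.
        idx = rng.integers(0, s, size=(h, w))
        frame = samples[idx, np.arange(h)[:, None], np.arange(w)[None, :]]
        frame = ((frame - lo) / (hi - lo + 1e-8) * 255).astype(np.uint8)
        frames.append(frame)
    imageio.mimsave(path, frames, duration=0.2)

# Dummy usage: pixels with a larger posterior spread flicker more in the GIF.
samples = np.random.normal(loc=100.0, scale=5.0, size=(50, 64, 64)).astype(np.float32)
posterior_animation(samples)
```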
Check out our example notebooks. Please use them in the order given below.
Convallaria example:
- Creating a noise model: Convallaria-1-CreateNoiseModel.ipynb (the sketch after this list illustrates the underlying idea)
- Training a network: Convallaria-2-Training.ipynb
- Predicting, i.e. denoising, images: Convallaria-3-Prediction.ipynb
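As a conceptual sketch of what the noise-model notebook does (simplified, not the repository's own helper functions): from a stack of calibration images of a static sample, the pixel-wise mean acts as a pseudo ground-truth signal, and a 2D histogram over (signal, observation) pairs approximates the noise model p(observation | signal). The function and variable names below are illustrative assumptions.

```python
import numpy as np

def histogram_noise_model(calibration_stack, bins=256):
    """Build a histogram-based noise model from repeated noisy acquisitions.

    calibration_stack: array of shape (N, H, W), N noisy images of the same scene.
    Returns a (bins x bins) array where each row is p(observation | signal bin).
    """
    signal = calibration_stack.mean(axis=0)                    # pseudo ground truth
    signal_rep = np.broadcast_to(signal, calibration_stack.shape)
    lo = min(calibration_stack.min(), signal.min())
    hi = max(calibration_stack.max(), signal.max())
    hist, s_edges, x_edges = np.histogram2d(
        signal_rep.ravel(), calibration_stack.ravel(),
        bins=bins, range=[[lo, hi], [lo, hi]])
    # Normalize each signal row so it becomes a distribution over observations.
    hist = hist / np.clip(hist.sum(axis=1, keepdims=True), 1e-20, None)
    return hist, s_edges, x_edges

# Dummy usage: 100 noisy (Poisson) copies of a synthetic 64x64 image.
clean = np.random.uniform(50, 200, size=(64, 64))
stack = np.random.poisson(clean, size=(100, 64, 64)).astype(np.float32)
noise_model = histogram_noise_model(stack)
```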
Mouse example from Zhang et al. 2019 (https://github.com/bmmi/denoising-fluorescence):
- Downloading the data: Mouse-0-GetData.ipynb
- Creating a noise model: Mouse-1-CreateNoiseModel.ipynb
- Training a network: Mouse-2-Training.ipynb
- Predicting, i.e. denoising, images: Mouse-3-Prediction.ipynb
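The prediction notebooks rest on the following idea, sketched here in simplified form (this is not the repository's code): for each pixel, the trained network provides a set of samples describing the prior over the clean signal, and the noise model gives the likelihood of the noisy observation for each sample. The MMSE estimate is the likelihood-weighted mean of the prior samples, and the weighted standard deviation indicates the uncertainty. The Gaussian noise model below is an assumption used only for the dummy example.

```python
import numpy as np

def mmse_estimate(prior_samples, observation, likelihood_fn):
    """MMSE denoising for a single pixel.

    prior_samples: 1D array of signal samples predicted by the network for this pixel.
    likelihood_fn(x, s): noise model, p(observation x | clean signal s).
    """
    weights = likelihood_fn(observation, prior_samples)
    weights = weights / weights.sum()
    mean = np.sum(weights * prior_samples)               # MMSE estimate
    std = np.sqrt(np.sum(weights * (prior_samples - mean) ** 2))  # uncertainty
    return mean, std

# Dummy usage with a Gaussian noise model of standard deviation 10 (an assumption).
gauss = lambda x, s: np.exp(-0.5 * ((x - s) / 10.0) ** 2)
prior_samples = np.random.normal(loc=120.0, scale=15.0, size=800)
denoised, uncertainty = mmse_estimate(prior_samples, observation=135.0, likelihood_fn=gauss)
```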
PN2V now also includes N2V functionality. See the following notebooks for an example of how to use it.