Official PyTorch implementation of EL2NM: Extremely Low-light Noise Modeling Through Diffusion Iteration. [HomePage] [Paper] [Dataset] [Checkpoints]
*(2024.03.28)*: 🎉 Our paper was accepted by the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2024!
- Python >=3.6, PyTorch >= 1.6
- Requirements: opencv-python, pandas, scipy, lpips, yacs
- Platforms: Ubuntu 18.04, cuda-11.3
- Our method can run on the CPU, but we recommend running it on a GPU
- Clone this project using:

```shell
git clone https://github.com/OptimistQAQ/EL2NM.git
```
- Install the dependencies using:

```shell
conda env create -f environment.yml
source activate el2nm
```
- Train using:

```shell
cd scripts
python train_ddpm_model.py
```
- Test using:

```shell
cd scripts
python test_ddpm_model.py
```
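For intuition about what the trained model does, here is a minimal sketch of the DDPM-style forward (noising) iteration that diffusion-based noise modeling builds on. All names, the linear schedule, and the toy values are illustrative assumptions, not the EL2NM code:

```python
import math
import random

# Assumed linear beta schedule (illustration only, not the EL2NM settings).
T = 50
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]

# alpha_bar_t = prod_{s<=t} (1 - beta_s), the cumulative signal retention.
alpha_bars = []
alpha_bar = 1.0
for b in betas:
    alpha_bar *= 1.0 - b
    alpha_bars.append(alpha_bar)

def q_sample(x0, t, rng):
    """Forward process: diffuse a clean value x0 to timestep t in one shot,
    using x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise."""
    noise = rng.gauss(0.0, 1.0)
    return math.sqrt(alpha_bars[t]) * x0 + math.sqrt(1.0 - alpha_bars[t]) * noise

rng = random.Random(0)
# Diffuse a clean (zero) pixel many times; the sample spread should match
# the schedule's predicted std, sqrt(1 - alpha_bar_T).
samples = [q_sample(0.0, T - 1, rng) for _ in range(1000)]
std = (sum(s * s for s in samples) / len(samples)) ** 0.5
print(round(std, 2), round(math.sqrt(1.0 - alpha_bars[-1]), 2))
```

The reverse iteration (the part a trained network performs) runs this chain backwards, which is how the method can synthesize realistic extreme-low-light noise rather than plain Gaussian noise.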
If you find our code helpful in your research or work, please cite our paper.
```bibtex
@InProceedings{Qin_2024_CVPR,
    author    = {Qin, Jiahao and Qin, Pinle and Chai, Rui and Qin, Jia and Jin, Zanxia},
    title     = {EL2NM: Extremely Low-light Noise Modeling Through Diffusion Iteration},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {1085-1094}
}
```
Part of the code comes from other repositories; please abide by the original open-source licenses for the relevant code.