
EL2NM: Extremely Low-light Noise Modeling Through Diffusion Iteration

Official implementation of EL2NM: Extremely Low-light Noise Modeling Through Diffusion Iteration in pytorch. [HomePage] [Paper] [Dataset] [Checkpoints]

🎉 News

*(2024.03.28)*: 🎉 Our paper was accepted by the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2024!

📋 Prerequisites

  • Python >=3.6, PyTorch >= 1.6
  • Requirements: opencv-python, pandas, scipy, lpips, yacs
  • Platforms: Ubuntu 18.04, cuda-11.3
  • Our method can run on the CPU, but we recommend running it on a GPU
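
Before running the scripts, you can verify that the listed dependencies are importable. This is a minimal sketch (not part of the EL2NM codebase) that checks the Python version and the packages from the Prerequisites list using only the standard library:

```python
# Environment sanity check for the Prerequisites above (hypothetical helper,
# not shipped with EL2NM). Uses only the standard library.
import importlib.util
import sys

# Import names for the required packages (opencv-python imports as "cv2").
REQUIRED = ["torch", "cv2", "pandas", "scipy", "lpips", "yacs"]

def missing_packages(names):
    """Return the subset of `names` that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    assert sys.version_info >= (3, 6), "Python >= 3.6 is required"
    missing = missing_packages(REQUIRED)
    if missing:
        print("Missing packages:", ", ".join(missing))
    else:
        print("All required packages found.")
```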

🎬 Quick Start

  1. Clone this project:
     ```shell
     git clone https://github.com/OptimistQAQ/EL2NM.git
     ```
  2. Install the dependencies:
     ```shell
     conda env create -f environment.yml
     source activate el2nm
     ```
  3. Train the model:
     ```shell
     cd scripts
     python train_ddpm_model.py
     ```
  4. Test the model:
     ```shell
     cd scripts
     python test_ddpm_model.py
     ```

🏷️ Citation

If you find our code helpful in your research or work, please cite our paper:

```bibtex
@InProceedings{Qin_2024_CVPR,
    author    = {Qin, Jiahao and Qin, Pinle and Chai, Rui and Qin, Jia and Jin, Zanxia},
    title     = {EL2NM: Extremely Low-light Noise Modeling Through Diffusion Iteration},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {1085-1094}
}
```

🤝 Acknowledgments

Part of the code comes from other repositories; please abide by the original open-source licenses for the relevant code.