By Meng Chang, Qi Li, Huajun Feng, Zhihai Xu
This is the official PyTorch implementation of "Spatial-Adaptive Network for Single Image Denoising" [Paper]
(Note: the source code is a coarse version for reference, and the provided model may not be optimal.)
- Python 3.6
- PyTorch 1.1
- CUDA 9.0
The Deformable ConvNets V2 (DCNv2) module in our code is adopted from chengdazhi's implementation. You can compile it for your machine:

```
cd ./dcn
python setup.py develop
```
Please make sure your machine has a GPU, which is required for the DCNv2 module.
- Download the training dataset and use `gen_dataset_*.py` to package it in the h5py format.
- Place the h5py file in `/dataset/train/`, or set `src_path` in `option.py` to your own path.
- You can set any training parameter in `option.py`. After that, train the model:

```
cd $SADNet_ROOT
python train.py
```
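As a rough illustration of the packaging step above, the following is a minimal sketch of what a `gen_dataset_*.py`-style script does: write (noisy, clean) patch pairs into a single h5py file. The group layout, key names (`noisy`, `clean`), and patch shapes here are assumptions for illustration, not the repository's actual format.

```python
# Hypothetical sketch of packaging training patches into h5py format.
# Key names and layout are assumptions, not the repo's actual scheme.
import h5py
import numpy as np

def pack_patches(pairs, out_path):
    """Write a list of (noisy, clean) uint8 patch pairs to an HDF5 file."""
    with h5py.File(out_path, "w") as f:
        for i, (noisy, clean) in enumerate(pairs):
            g = f.create_group(str(i))          # one group per patch pair
            g.create_dataset("noisy", data=noisy, compression="gzip")
            g.create_dataset("clean", data=clean, compression="gzip")

# Random data standing in for real image patches
rng = np.random.default_rng(0)
pairs = [(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8),
          rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))
         for _ in range(4)]
pack_patches(pairs, "train.h5")
```

A training loader would then open the file once and index patch pairs by group name.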
- Download the trained models from Google Drive and place them in `/ckpt/`.
- Place the testing dataset in `/dataset/test/`, or set the testing path in `option.py` to your own path.
- Set the parameters in `option.py` (e.g. `epoch_test`, `gray`, etc.).
- Test the trained models:

```
cd $SADNet_ROOT
python test.py
```
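Scripts like `option.py` commonly expose such parameters through `argparse`; the following is only a hypothetical sketch of that pattern using the flag names mentioned above (`epoch_test`, `gray`, `src_path`), and the actual `option.py` in this repository may be structured differently.

```python
# Hypothetical argparse-style sketch of option.py test parameters;
# defaults and structure are illustrative assumptions only.
import argparse

parser = argparse.ArgumentParser(description="SADNet testing options (sketch)")
parser.add_argument("--epoch_test", type=int, default=200,
                    help="which training epoch's checkpoint to evaluate")
parser.add_argument("--gray", type=int, default=0,
                    help="1 for grayscale models, 0 for color")
parser.add_argument("--src_path", type=str, default="./dataset/test/",
                    help="path to the packaged testing dataset")

opt = parser.parse_args([])  # parse defaults; pass sys.argv[1:] in practice
print(opt.epoch_test, opt.gray, opt.src_path)
```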
If you find the code helpful in your research or work, please cite the following paper.
@article{chang2020spatial,
title={Spatial-Adaptive Network for Single Image Denoising},
author={Chang, Meng and Li, Qi and Feng, Huajun and Xu, Zhihai},
journal={arXiv preprint arXiv:2001.10291},
year={2020}
}