Link to download the pretrained models.
- Python 3.6
- PyTorch 1.5.1
- pytorch-msssim 0.2.0
- ptflops 0.6.3
- tqdm 4.48.2
- scikit-image 0.17.2
- yaml 0.2.5
- MATLAB (to create testing datasets)
For training, we used the DIV2K dataset. You need to download the dataset and place the high-resolution image folders in the './Dataset' folder. You can modify the train_files.txt and val_files.txt files to load only part of the dataset.
Default parameters used in the paper are set in the config.yaml file:
- patch size: 64
- batch size: 16
- learning rate: 1.e-4
- weight decay: 1.e-5
- scheduler gamma: 0.5
- scheduler step: 3
- epochs: 21
Additionally, you can choose the device, the number of workers of the data loader, and enable multiple GPU use.
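Putting the defaults above together, a config.yaml might look like the sketch below. The exact key names (including the device, worker, and multi-GPU entries) are assumptions; check the file shipped with the repository for the actual schema:

```yaml
# Hypothetical key names -- verify against the repository's config.yaml.
patch size: 64
batch size: 16
learning rate: 1.e-4
weight decay: 1.e-5
scheduler gamma: 0.5
scheduler step: 3
epochs: 21
device: cuda        # assumed key for device selection
workers: 4          # assumed key for data-loader workers
multi gpu: false    # assumed key for enabling multiple GPUs
```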
To train the model use the following command:
python main_train.py
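With the defaults above (learning rate 1.e-4, scheduler gamma 0.5, scheduler step 3), the learning rate is halved every 3 epochs. A quick sketch of the implied schedule, assuming a standard step decay as in PyTorch's StepLR:

```python
def lr_at_epoch(epoch, base_lr=1e-4, gamma=0.5, step=3):
    """Learning rate after `epoch` epochs under step decay:
    lr = base_lr * gamma ** (epoch // step)."""
    return base_lr * gamma ** (epoch // step)

# Print the learning rate at the start of each decay interval over 21 epochs.
for e in range(0, 21, 3):
    print(f"epoch {e:2d}: lr = {lr_at_epoch(e):.2e}")
```

Over the 21 default epochs this gives seven decay intervals, ending near 1.6e-6.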
Place the pretrained models in the './Pretrained' folder. Modify the config.yaml file according to the model you want to use: model channels: 3 for the color model and model channels: 1 for the grayscale model.
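For example, to test the color model the relevant entry in config.yaml would read (key name as given above):

```yaml
model channels: 3   # 3 = color model, 1 = grayscale model
```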
Test datasets need to be prepared using the MATLAB codes in the './Datasets' folder according to the desired noise level. To test the RDUNet model we use the Set12, CBSD68, Kodak24, and Urban100 datasets.
To test the model use the following command:
python main_test.py
Results reported in the paper.
If you have any questions about the code or paper, please contact aneesahamed@ieee.org