RDDCNN

A robust deformed convolutional neural network for image denoising (RDDCNN) is proposed by Qi Zhang, Jingyu Xiao, Chunwei Tian*, Jerry Chun-Wei Lin and Shichao Zhang. It was accepted by CAAI Transactions on Intelligence Technology (official journal of the Chinese Association for Artificial Intelligence, SCI, IF 7.985) in 2022 and is implemented in PyTorch. The paper is available at https://ietresearch.onlinelibrary.wiley.com/doi/full/10.1049/cit2.12110.

RDDCNN mainly combines a deformable convolution and a stacked architecture with a dilated convolution to restore high-quality pixels, exploiting the relations of surrounding pixels and the obtained structural information for image denoising.

Abstract

Due to their strong learning ability, convolutional neural networks (CNNs) have been widely developed for image denoising. However, convolutional operations may change the original distribution of noise in corrupted images, which may increase the training difficulty in image denoising. Using the relations of surrounding pixels can effectively resolve this problem. Inspired by this, we propose a robust deformed denoising CNN (RDDCNN) in this paper. The proposed RDDCNN contains three blocks: a deformable block (DB), an enhanced block (EB) and a residual block (RB). The DB can extract more representative noise features via a deformable learnable kernel and a stacked convolutional architecture, according to the relations of surrounding pixels. The EB can facilitate contextual interaction through a dilated convolution and a novel combination of convolutional layers, batch normalization (BN) and ReLU, which can enhance the learning ability of the proposed RDDCNN. To address the long-term dependency problem, the RB is used to enhance the memory ability of shallow layers on deep layers and construct a clean image. Besides, we implement a blind denoising model. Experimental results demonstrate that our denoising model outperforms popular denoising methods in terms of qualitative and quantitative analysis. Code can be obtained at https://github.com/hellloxiaotian/RDDCNN.
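To make the three blocks concrete, here is a minimal PyTorch sketch of how a DB, EB and RB could be composed into a residual denoiser. It is only an illustration under stated assumptions: torchvision.ops.DeformConv2d (available in recent torchvision releases) stands in for the deformable kernel, and the layer counts, channel widths and dilation rate are placeholders rather than the paper's exact configuration; see the code in this repository for the actual network.

```python
# Hedged sketch of the DB / EB / RB structure described in the abstract.
# Assumptions: torchvision.ops.DeformConv2d stands in for the deformable kernel,
# and depths, channel widths and the dilation rate are illustrative only.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableBlock(nn.Module):
    """DB: a small convolution predicts sampling offsets from the surrounding
    pixels, and the deformable kernel samples the input at those positions."""

    def __init__(self, in_ch, ch, k=3):
        super().__init__()
        # 2 offsets (x and y) per kernel sampling point
        self.offset = nn.Conv2d(in_ch, 2 * k * k, k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, ch, k, padding=k // 2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.deform(x, self.offset(x)))


class EnhancedBlock(nn.Module):
    """EB: a dilated convolution plus stacked Conv-BN-ReLU layers to enlarge
    the receptive field and enhance contextual interaction."""

    def __init__(self, ch, depth=4):
        super().__init__()
        layers = [nn.Conv2d(ch, ch, 3, padding=2, dilation=2),
                  nn.BatchNorm2d(ch), nn.ReLU(inplace=True)]
        for _ in range(depth - 1):
            layers += [nn.Conv2d(ch, ch, 3, padding=1),
                       nn.BatchNorm2d(ch), nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)


class RDDCNNSketch(nn.Module):
    """RB idea: the network estimates the noise, and the clean image is
    reconstructed as input minus estimated noise (residual learning)."""

    def __init__(self, in_ch=1, ch=64):
        super().__init__()
        self.db = DeformableBlock(in_ch, ch)
        self.eb = EnhancedBlock(ch)
        self.tail = nn.Conv2d(ch, in_ch, 3, padding=1)

    def forward(self, noisy):
        noise = self.tail(self.eb(self.db(noisy)))
        return noisy - noise


if __name__ == "__main__":
    net = RDDCNNSketch()
    out = net(torch.randn(2, 1, 40, 40))   # a batch of gray patches
    print(out.shape)                        # torch.Size([2, 1, 40, 40])
```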

pipeline

Requirements

Python 3.7

PyTorch 1.1

CUDA 10.0

cuDNN 7

torchvision

OpenCV for Python

HDF5 for Python (h5py)
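As a quick sanity check of the environment (not part of the repository), the snippet below prints the installed versions so they can be compared against the list above:

```python
# Print the installed versions to compare against the requirements listed above.
import torch
import torchvision
import cv2
import h5py

print("PyTorch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("OpenCV:", cv2.__version__)
print("h5py:", h5py.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA version:", torch.version.cuda)                 # expected 10.0
print("cuDNN version:", torch.backends.cudnn.version())    # expected 7xxx
```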

Dataset

Training sets

The training set of gray noisy images can be downloaded here.

The training set of real noisy images can be downloaded here.

Test sets

The test set BSD68 of gray noisy images can be downloaded here.

The test set Set12 of gray noisy images can be downloaded here.

The test set CC of real noisy images can be downloaded here.

Training

To train on gray images with a known noise level, run the following example:

CUDA_VISIBLE_DEVICES=0 python gray/train.py --sigma $SIGMA --mode S --train_data $YOUR_SET_PATH

To train on gray images with an unknown noise level (blind denoising), run the following example:

CUDA_VISIBLE_DEVICES=0 python gray/train.py --sigma $SIGMA --mode B --train_data $YOUR_SET_PATH

To train on real noisy images, run the following example:

CUDA_VISIBLE_DEVICES=0 python real/train_r.py --train_data $YOUR_SET_PATH
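For context on what the --sigma and --mode flags control, the sketch below shows the usual difference between training at a known noise level (mode S) and blind training (mode B). It is only a hedged illustration; the actual data pipeline lives in gray/train.py, and the blind noise range used here is a common choice rather than the repository's confirmed setting.

```python
# Hedged illustration of --mode S (fixed, known sigma) versus --mode B (blind).
# The real data pipeline is implemented in gray/train.py; the function name and
# the blind range below are illustrative assumptions.
import torch


def add_gaussian_noise(clean, sigma=None, blind_range=(0.0, 55.0)):
    """clean: tensor scaled to [0, 1]; sigma: noise level on the 0-255 scale,
    or None to draw a random level per patch (blind training)."""
    if sigma is None:                                      # --mode B
        sigma = torch.empty(1).uniform_(*blind_range).item()
    noise = torch.randn_like(clean) * (sigma / 255.0)
    return clean + noise


patch = torch.rand(1, 1, 40, 40)                           # a clean gray patch
noisy_known = add_gaussian_noise(patch, sigma=25)          # like --mode S --sigma 25
noisy_blind = add_gaussian_noise(patch)                    # like --mode B
```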

Test

We provide pretrained models on Google Drive for validation.

The model trained with gray noisy images at noise level 15: download

The model trained with gray noisy images at noise level 25: download

The model trained with gray noisy images at noise level 50: download

The model trained with gray noisy images with unknown noise level: download

The model for real noisy images: download

To validate an RDDCNN model trained at a known noise level, run the following example:

CUDA_VISIBLE_DEVICES=0 python gray/test.py --sigma $SIGMA --mode S --model_dir $YOUR_MODEL_PATH --set_dir $YOUR_SET_PATH

To validate an RDDCNN model trained with unknown noise levels, run the following example:

CUDA_VISIBLE_DEVICES=0 python gray/test.py --sigma $SIGMA --mode B --model_dir $YOUR_MODEL_PATH --set_dir $YOUR_SET_PATH

To validate the RDDCNN model trained on real noisy images, run the following example:

CUDA_VISIBLE_DEVICES=0 python real/test_r.py --model_dir $YOUR_MODEL_PATH --set_dir $YOUR_SET_PATH
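The results below are reported as PSNR (dB). For reference, here is a minimal sketch of how PSNR between a denoised image and its ground truth can be computed; the repository's test scripts contain their own evaluation code, so this is only illustrative.

```python
# Minimal PSNR (dB) computation between a ground-truth image and a denoised result.
# Illustrative only; the test scripts in this repository have their own evaluation code.
import numpy as np


def psnr(clean, denoised, data_range=255.0):
    """clean, denoised: arrays on the same intensity scale (here 0-255)."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((data_range ** 2) / mse)


# Example: PSNR of a noisy input (sigma = 25) against its clean version
clean = np.random.rand(256, 256) * 255.0
noisy = clean + np.random.randn(256, 256) * 25.0
print(f"PSNR of the noisy input: {psnr(clean, noisy):.2f} dB")
```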

Experimental results

1. Denoising results of different methods on BSD68 at noise level 25

Ablation

2. Comparisons of deformable convolution and common convolution

ComparisonsOfDeformableConvAndConv

3. PSNR (dB) results of several networks on BSD68 at noise levels 15, 25 and 50

BSD68

4. Average PSNR (dB) results of different methods on Set12 at noise levels 15, 25 and 50

set12

5. Complexity of different denoising methods

Complexity

6. Running time (s) of different methods on images of size 256×256, 512×512 and 1024×1024

RunningTime

7. Average PSNR (dB) of different denoising methods on CC

CC

Visual results

Denoising results of different methods on one image from BSD68 when the noise level is 25. (a) Original image (b) Noisy image/20.19 dB (c) BM3D/36.59 dB (d) WNNM/37.22 dB (e) IRCNN/38.17 dB (f) FFDNet/38.41 dB (g) DnCNN/38.45 dB (h) RDDCNN/38.64 dB.

Fig1

Denoising results of different methods on one image from BSD68 when the noise level is 50. (a) Original image (b) Noisy image/14.66 dB (c) BM3D/29.87 dB (d) WNNM/30.07 dB (e) IRCNN/30.33 dB (f) DnCNN/30.48 dB (g) FFDNet/30.56 dB (h) RDDCNN/30.67 dB.

Fig2

Denoising results of different methods on one image from Set12 when the noise level is 15. (a) Original image (b) Noisy image/24.60 dB (c) BM3D/31.37 dB (d) WNNM/31.62 dB (e) FFDNet/31.81 dB (f) DnCNN/31.83 dB (g) IRCNN/31.84 dB (h) RDDCNN/31.93 dB.

Fig3

Citation information is shown below.

1. Zhang Q, Xiao J, Tian C, et al. A robust deformed convolutional neural network (CNN) for image denoising[J]. CAAI Transactions on Intelligence Technology, 2022.

2. @article{zhang2022robust,
     title={A robust deformed convolutional neural network (CNN) for image denoising},
     author={Zhang, Qi and Xiao, Jingyu and Tian, Chunwei and Chun-Wei Lin, Jerry and Zhang, Shichao},
     journal={CAAI Transactions on Intelligence Technology},
     year={2022},
     publisher={Wiley Online Library}
   }