
Dual Residual Networks

By Xing Liu¹, Masanori Suganuma¹,², Zhun Sun², Takayuki Okatani¹,²

¹Tohoku University, ²RIKEN Center for AIP

link to the paper

News

i) A summary of the experimental settings used for training has been added.

ii) Some mistakes in ./train/raindrop.py have been fixed.

Table of Contents

  1. Abstract

  2. Citation

  3. Numerical Results

  4. Models

  5. Datasets

  6. Test

  7. Train

  8. Visual Results

Abstract

In this paper, we study the design of deep neural networks for image restoration tasks. We propose a novel style of residual connections dubbed “dual residual connection”, which exploits the potential of paired operations, e.g., up- and down-sampling or convolution with large- and small-size kernels. We design a modular block implementing this connection style; it is equipped with two containers into which arbitrary paired operations are inserted. Adopting the “unraveled” view of residual networks proposed by Veit et al., we point out that a stack of the proposed modular blocks allows the first operation in a block to interact with the second operation in any subsequent block. Specifying the two operations in each of the stacked blocks, we build a complete network for each individual task of image restoration. We experimentally evaluate the proposed approach on five image restoration tasks using nine datasets. The results show that the proposed networks with properly chosen paired operations outperform previous methods on almost all of the tasks and datasets.
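The connection pattern can be sketched in a few lines. This is a minimal, framework-free illustration of the idea, not the authors' implementation: real blocks also contain convolutions, and the paired operations (op1, op2) would be, e.g., down- and up-sampling modules.

```python
def dual_residual_stack(x, blocks):
    """Apply a stack of dual residual blocks to input x.

    blocks: list of (op1, op2) pairs of "container" operations,
    e.g. (down-sampling, up-sampling). Two residual streams are kept:
    `x` (the block input) and `res` (the output of the previous op1),
    so op1 of one block can interact with op2 of any later block.
    """
    res = x  # second residual stream, carried across blocks
    for op1, op2 in blocks:
        out = op1(x) + res   # first residual connection
        res = out            # carry op1's output forward
        x = op2(out) + x     # second residual connection
    return x

# Toy paired operations on scalars (stand-ins for paired modules):
halve = lambda v: v / 2
double = lambda v: v * 2

y = dual_residual_stack(1.0, [(halve, double), (halve, double)])
# y == 11.0
```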

Citation

@article{DuRN_arxiv,
title={Dual Residual Networks Leveraging the Potential of Paired Operations for Image Restoration},
author={Liu, Xing and Suganuma, Masanori and Sun, Zhun and Okatani, Takayuki},
journal={arXiv preprint arXiv:1903.08817},
year={2019},
}

@inproceedings{DuRN_cvpr19,
title={Dual Residual Networks Leveraging the Potential of Paired Operations for Image Restoration},
author={Liu, Xing and Suganuma, Masanori and Sun, Zhun and Okatani, Takayuki},
booktitle={Proc. Conference on Computer Vision and Pattern Recognition},
pages={7007--7016},
year={2019},
}

Numerical Results

Please find them in the test/results_confirmed.txt file.

Models

Please find them here.

Datasets

Gaussian noise removal

  • BSD500-gray (used in our paper)
    If you also want the original BSD500, click here.

Real-world noise removal

Motion blur removal

Haze removal

Raindrop removal

Rain-streak removal

Test

Requirements

  • Python 3.7
  • PyTorch 1.2.0

Instructions

  1. Download and un-zip the models, then place the resulting trainedmodels folder in the project folder.
  2. Download the datasets and put them into the data folder. Make sure each dataset's name and its sub-folder name(s) match the layout expected by the current data folder.
  3. Go to the test folder and run the scripts.
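Assuming the folder names mentioned in the steps above, the expected project layout is roughly:

```
DualResidualNetworks/
├── trainedmodels/   # downloaded pre-trained models (step 1)
├── data/            # datasets, one sub-folder per dataset (step 2)
└── test/            # test scripts to run (step 3)
```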

Visual Results

Gaussian noise removal

Real-world noise removal

Motion blur removal - 1

Motion blur removal - 2

Some examples of object detection

Haze removal - 1

The images were taken with an iPhone 6 Plus.

Haze removal - 2

Haze removal - 3

Compare inside-feature maps with transmission map

Raindrop removal

Rain-streak removal