
DMENet: Deep Defocus Map Estimation Network

License: CC BY-NC

This repository contains the official TensorFlow implementation of the following paper:

Deep Defocus Map Estimation using Domain Adaptation
Junyong Lee, Sungkil Lee, Sunghyun Cho and Seungyong Lee, CVPR 2019

Getting Started

Prerequisites

Tested environment

Ubuntu, Python 3.6, TensorFlow 1.15.0, TensorLayer 1.11.1, CUDA 10.0.130, cuDNN 7.6

  1. Setup environment

    $ git clone https://github.com/codeslake/DMENet.git
    $ cd DMENet
    
    # for CUDA10.0
    $ conda create -y --name DMENet python=3.6 && conda activate DMENet
    $ sh install_CUDA10.0.sh
    
    # for CUDA11.1 (the name of conda environment matters)
    $ conda create -y --name DMENet_CUDA11 python=3.6 && conda activate DMENet_CUDA11
    $ sh install_CUDA11.1.sh
  2. Download and unzip the datasets (option 1, option 2) under [DATASET_ROOT].

    ├── [DATASET_ROOT]
    │   ├── train
    │   │   ├── SYNDOF
    │   │   ├── CUHK
    │   │   ├── Flickr
    │   ├── test
    │   │   ├── CUHK
    │   │   ├── RTF
    │   │   ├── SYNDOF
    

    Note:

    • [DATASET_ROOT] is currently set to ./datasets/. It can be specified by modifying config.data_offset in ./config.py.
  3. Download the pretrained DMENet weights and unzip them so that the checkpoint sits at [LOG_ROOT]/DMENet_BDCS/checkpoint/DMENet_BDCS.npz ([LOG_ROOT] is currently set to ./logs/).

  4. Download the pretrained VGG19 weights and unzip them to pretrained/vgg19.npy (needed for training only).
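
To double-check that everything landed where the code expects it, the following is a small sanity-check sketch (not part of the repository) that assumes the default [DATASET_ROOT] = ./datasets/ and [LOG_ROOT] = ./logs/ mentioned above:

    # Hypothetical sanity check (not part of the repo): confirms the tested
    # library versions and the default file layout described in this README.
    import os
    import tensorflow as tf
    import tensorlayer as tl

    print('TensorFlow :', tf.__version__)   # tested with 1.15.0
    print('TensorLayer:', tl.__version__)   # tested with 1.11.1

    expected = [
        './datasets/train/SYNDOF', './datasets/train/CUHK', './datasets/train/Flickr',
        './datasets/test/CUHK', './datasets/test/RTF', './datasets/test/SYNDOF',
        './logs/DMENet_BDCS/checkpoint/DMENet_BDCS.npz',  # pretrained DMENet weights
        './pretrained/vgg19.npy',                         # VGG19 weights (training only)
    ]
    for path in expected:
        print('OK     ' if os.path.exists(path) else 'MISSING', path)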

Logs

  • Training and testing logs will be saved under [LOG_ROOT]/[mode]/:

    ├── [LOG_ROOT]
    │   ├── [mode]
    │   │   ├── checkpoint      # model checkpoint
    │   │   ├── log             # scalar/image log for tensorboard
    │   │   ├── sample          # sample images of training
    │   │   ├── result          # resulting images of evaluation
    

    Note:

    • [LOG_ROOT] is currently set to ./logs/. It can be specified by modifying config.root_offset in ./config.py.
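
For reference, relocating the datasets and logs only requires editing two assignments in ./config.py. The snippet below sketches what those two lines amount to under the defaults stated above; config here is a hypothetical stand-in for the repository's actual config object, which may be organized differently:

    # Hypothetical stand-in for the config object built in ./config.py; only
    # these two assignments need to change to relocate datasets and logs.
    from types import SimpleNamespace

    config = SimpleNamespace()
    config.data_offset = './datasets/'  # [DATASET_ROOT]: location of train/ and test/
    config.root_offset = './logs/'      # [LOG_ROOT]: location of checkpoints, logs, samples, results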

Testing final model of CVPR 2019

Please note that, due to a server issue, the checkpoint used for the paper was lost.
The provided checkpoint is a new one whose evaluation results are the closest to those reported in the paper.

Check out the updated performance obtained with the new checkpoint.

  • Test the final model by:

    python main.py --mode DMENet_BDCS --test_set CUHK

    Note:

    • Testing results will be saved in [LOG_ROOT]/[mode]/result/[test_set]/:

      ...
      ├── [test_set]
      │   ├── image                     # input defocused images
      │   ├── defocus_map               # defocus maps (the network's direct output, in range [0, 1])
      │   ├── defocus_map_min_max_norm  # min-max normalized defocus maps (in range [0, 1]) for visualization
      │   ├── sigma_map_7_norm          # sigma maps of normalized Gaussian standard deviations (in range [0, 1]); multiply by 7 for the actual standard deviation (see the conversion sketch after this list)
      
    • Quantitative results are computed with MATLAB (e.g., evaluation on the RTF dataset).

    • Options
      • --mode: The name of the model to test. A logging folder named after [mode] will be used, i.e., [LOG_ROOT]/[mode]/. Default: DMENet_BDCS
      • --test_set: The name of the dataset to evaluate: CUHK | RTF0 | RTF1 | RTF1_6 | random. Default: CUHK
        • The folder structure can be modified in the function get_eval_path(..) in ./config.py.
        • random is for testing the model with arbitrary images, which should be placed under [DATASET_ROOT]/random/*.[jpg|png].
  • Check out the evaluation code for the RTF dataset, and the deconvolution code.
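
As a concrete example of consuming the saved outputs, the sketch below loads a saved sigma_map_7_norm result and multiplies it by 7 to recover per-pixel Gaussian standard deviations. The file name is hypothetical, and the sketch assumes the maps are written as 8-bit grayscale images; adjust the divisor if they are saved differently:

    # Hypothetical post-processing sketch: recover per-pixel Gaussian standard
    # deviations (in pixels) from a saved sigma_map_7_norm image.
    import numpy as np
    from PIL import Image

    result_path = './logs/DMENet_BDCS/result/CUHK/sigma_map_7_norm/0001.png'  # hypothetical file name
    sigma_norm = np.asarray(Image.open(result_path).convert('L'), dtype=np.float32) / 255.0
    sigma = sigma_norm * 7.0  # actual Gaussian standard deviation per pixel
    print('sigma range: [%.2f, %.2f]' % (sigma.min(), sigma.max()))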

Training & testing the network

  • Train the network by:

    python main.py --is_train --mode [mode]

    Note:

    • If you train DMENet with a newly generated SYNDOF dataset from this repo, comment this line and uncomment this line before training.
  • Test the network by:

    python main.py --mode [mode] --test_set [test_set]
    • Options
      • --mode: The name of the model to train. A logging folder named after [mode] will be created as [LOG_ROOT]/[mode]/. Default: DMENet_BDCS
      • --is_pretrain: Pretrain the network with the MSE loss (True | False). Default: False
      • --delete_log: Delete [LOG_ROOT]/[mode]/* before training begins (True | False). Default: False
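
Putting these together, a full run that trains a model and then evaluates it on the CUHK test set chains the two commands above with the same [mode] name (here the default DMENet_BDCS):

    # Train the model, then evaluate the resulting checkpoint on the CUHK test set
    python main.py --is_train --mode DMENet_BDCS
    python main.py --mode DMENet_BDCS --test_set CUHK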

Citation

If you find this code useful, please consider citing:

@InProceedings{Lee_2019_CVPR,
    author = {Lee, Junyong and Lee, Sungkil and Cho, Sunghyun and Lee, Seungyong},
    title = {Deep Defocus Map Estimation Using Domain Adaptation},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2019}
}

Contact

Open an issue for any inquiries. You may also contact junyonglee@postech.ac.kr.

Related Links

  • CVPR 2021: Iterative Filter Adaptive Network for Single Image Defocus Deblurring [paper][code]
  • ICCV 2021: Single Image Defocus Deblurring Using Kernel-Sharing Parallel Atrous Convolutions [paper][code]

Resources

All material related to our paper is available via the following links:

  • Paper PDF
  • Supplementary Files
  • Checkpoint Files
  • Datasets (option 1, option 2)
  • SYNDOF Generation Repo

License

This software is being made available under the terms in the LICENSE file.

Any exemptions to these terms require a license from the Pohang University of Science and Technology.

About Coupe Project

Project ‘COUPE’ aims to develop software that evaluates and improves the quality of images and videos based on big visual data. To achieve this goal, we extract sharpness, color, and composition features from images and develop technologies for restoring and improving image quality based on them. In addition, personalization technology through user preference analysis is under study.

Please check out other Coupe repositories in our Posgraph GitHub organization.
