Boundary-aware-Image-Inpainting

Intro

The official code for the following paper:

Boundary-aware Image Inpainting with Multiple Auxiliary Cues, NTIRE2022

Download the paper here.

Prerequisites

  • Python3
  • PyTorch 1.0
  • NVIDIA GPU + CUDA cuDNN

Installation

1. Clone the repository:

git clone https://github.com/rain58/Boundary-aware-Image-Inpainting.git  
cd Boundary-aware-Image-Inpainting

2. Create the Python environment:

pip install -r requirements.txt

Datasets

RGB Images

We use Places2 and Paris Street-View. Please download the datasets from their official websites. After downloading, make the flist files:

mkdir datasets
python ./scripts/flist.py --path path_to_rgb_train_set --output ./datasets/rgb_train.flist
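
An flist file is simply a text file with one image path per line. For reference, here is a minimal sketch of what an EdgeConnect-style flist script does; the actual ./scripts/flist.py may differ in details such as the accepted extensions:

# Minimal sketch of an EdgeConnect-style flist script: recursively collect
# image paths under --path and write them, one per line, to --output.
import os
import argparse
import numpy as np

parser = argparse.ArgumentParser()
parser.add_argument('--path', type=str, help='root directory of the dataset')
parser.add_argument('--output', type=str, help='where to write the flist file')
args = parser.parse_args()

ext = {'.jpg', '.jpeg', '.png'}
images = []
for root, _, files in os.walk(args.path):
    for name in files:
        if os.path.splitext(name)[1].lower() in ext:
            images.append(os.path.join(root, name))

np.savetxt(args.output, sorted(images), fmt='%s')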

Depths

Estimate depth images from the RGB image datasets using Dense Depth. The procedure is as follows.

  1. Fine-tune the pre-trained Dense Depth model on DIODE. We only use the outdoor images of the DIODE dataset. Please download it from here.
  2. Estimate the depth images from the RGB images (a sketch is given after this list).
  3. Make the flist files:
python ./scripts/flist.py --path path_to_depth_train_set --output ./datasets/depth_train.flist
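
For step 2, Dense Depth's own test script can be adapted to batch-estimate depth maps. Below is a minimal sketch, assuming Dense Depth's repository layout (its layers and utils modules) and a fine-tuned Keras model saved as diode_finetuned.h5, which is a hypothetical file name:

# Run inside the Dense Depth repository; the model file name is hypothetical.
import glob
import numpy as np
from PIL import Image
from keras.models import load_model
from layers import BilinearUpSampling2D   # Dense Depth's custom layer
from utils import load_images, predict    # Dense Depth helpers

custom_objects = {'BilinearUpSampling2D': BilinearUpSampling2D,
                  'depth_loss_function': None}
model = load_model('diode_finetuned.h5', custom_objects=custom_objects, compile=False)

paths = sorted(glob.glob('path_to_rgb_train_set/*.jpg'))
inputs = load_images(paths)       # loads and normalizes the RGB images
outputs = predict(model, inputs)  # one depth map per input image

# Save each depth map as an 8-bit grayscale image.
for path, depth in zip(paths, outputs):
    depth = depth.squeeze()
    depth = (depth / depth.max() * 255).astype(np.uint8)
    Image.fromarray(depth).save(path.replace('.jpg', '_depth.png'))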

Masks

We use the mask dataset provided by Liu et al. You can download it from here.
After downloading, make the flist files:

python ./scripts/flist.py --path path_to_mask_train_set --output ./datasets/mask_train.flist
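
The masks are binary images whose white pixels mark the missing region. As a quick sanity check (not part of this repo; the file name below is hypothetical), you can inspect the hole ratio of a downloaded mask:

# Compute the hole ratio of a single mask image (white pixels = holes).
import numpy as np
from PIL import Image

mask = np.array(Image.open('path_to_mask_train_set/00001.png').convert('L'))
hole_ratio = (mask > 127).mean()  # fraction of pixels treated as holes
print('hole ratio: {:.2%}'.format(hole_ratio))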

Getting Started

Training

To train the model, create a config.yaml file similar to the example config file and copy it under your checkpoints directory.
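
Since this code builds on EdgeConnect, the config file presumably follows EdgeConnect's key names. Below is a minimal sketch; the depth-related key is an assumption for this repo, and the actual example config file is authoritative:

# Sketch of a config.yaml based on EdgeConnect's config format.
# TRAIN_DEPTH_FLIST is an assumed key name for the depth cue.
MODEL: 6                                         # 1: edge, 5: depth, 6: inpaint
TRAIN_FLIST: ./datasets/rgb_train.flist
TRAIN_DEPTH_FLIST: ./datasets/depth_train.flist
TRAIN_MASK_FLIST: ./datasets/mask_train.flist
LR: 0.0001                                       # learning rate
BATCH_SIZE: 8
INPUT_SIZE: 256                                  # images resized to 256x256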

Our model is trained in three stages:

  1. Train the edge model, or download the pre-trained edge model from EdgeConnect.
  2. Train the depth model.
  3. Train the inpaint model.

To select a stage, change the "model" option number in train.sh. For example, edge: 1, depth: 5, inpaint: 6. Then run
sh train.sh
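
train.sh presumably wraps a call like the following (a sketch assuming an EdgeConnect-style command line; the flag names may differ in this repo):

# Sketch of the command train.sh likely runs; --model selects the stage.
python train.py --model 6 --checkpoints ./checkpoints/places2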

Testing

Download the pre-trained models from the following links and copy them under ./checkpoints.
Our pre-trained models: Paris Street-View, Places

To test our model, change the dataset path in test.sh and run

sh test.sh
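
test.sh presumably wraps a call like the following (again an EdgeConnect-style sketch with assumed flags and paths):

# Sketch of the command test.sh likely runs.
python test.py --model 6 --checkpoints ./checkpoints/places2 \
  --input path_to_test_images --mask path_to_test_masks --output ./results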

Acknowledgement

This work is based on EdgeConnect. We greatly appreciate their hard work and great research.