Image-Inpainting-using-Partial-Convolutional-Layers

Introduction

  • Implementation of the paper *Image Inpainting for Irregular Holes Using Partial Convolutions* (https://arxiv.org/pdf/1804.07723) by Guilin Liu, Fitsum A. Reda, Kevin J. Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro.
  • Implementation has been done in Keras.
  • Achieved a PSNR (Peak Signal-to-Noise Ratio) of 15.76 on validation images.
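PSNR compares a reconstructed image against the original via the mean squared error. The snippet below is a minimal sketch of how the metric is typically computed; the repo's exact evaluation code may differ.

```python
import numpy as np

def psnr(original, predicted, max_val=255.0):
    # PSNR = 20*log10(MAX) - 10*log10(MSE); higher is better.
    mse = np.mean((original.astype(np.float64) - predicted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 20 * np.log10(max_val) - 10 * np.log10(mse)
```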

Details

Setup

  • For training on the dataset provided in the Details section, run the inpainting notebook directly.
  • To train on your own dataset, use the same architecture with your data.

Implementation Details

  • The first task was to create masks for the images; OpenCV was used to build a random mask generator.
    Here are a few results of the random masks:


Architecture used: U-Net
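For reference, a minimal U-Net skeleton in Keras is sketched below. It uses plain `Conv2D` stand-ins and assumed layer sizes; the actual network replaces the convolutions with partial convolutions and is deeper than this.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_unet(input_shape=(256, 256, 3), base_filters=32):
    # Encoder-decoder with skip connections (the defining U-Net shape).
    inputs = layers.Input(input_shape)
    # Encoder: progressively downsample with strided convolutions.
    e1 = layers.Conv2D(base_filters, 3, strides=2, padding="same", activation="relu")(inputs)
    e2 = layers.Conv2D(base_filters * 2, 3, strides=2, padding="same", activation="relu")(e1)
    e3 = layers.Conv2D(base_filters * 4, 3, strides=2, padding="same", activation="relu")(e2)
    # Decoder: upsample and concatenate the matching encoder feature map.
    d2 = layers.Concatenate()([layers.UpSampling2D()(e3), e2])
    d2 = layers.Conv2D(base_filters * 2, 3, padding="same", activation="relu")(d2)
    d1 = layers.Concatenate()([layers.UpSampling2D()(d2), e1])
    d1 = layers.Conv2D(base_filters, 3, padding="same", activation="relu")(d1)
    out = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(layers.UpSampling2D()(d1))
    return Model(inputs, out)
```

The skip connections let the decoder reuse fine spatial detail from the encoder, which matters for filling holes with sharp boundaries.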

Partial Convolutional Layer:

  • As defined in the paper
  • Let W be the convolution filter weights and b the corresponding bias. X denotes the feature values (pixel values) in the current convolution (sliding) window, and M is the corresponding binary mask. The partial convolution at every location is expressed as:

    x′ = Wᵀ(X ⊙ M) · sum(1)/sum(M) + b,  if sum(M) > 0
    x′ = 0,                               otherwise

    where ⊙ denotes element-wise multiplication and sum(1) is the sum over an all-ones window of the same shape as M. After each partial convolution the mask is updated: m′ = 1 if sum(M) > 0, else m′ = 0.
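The equations above can be sketched as a custom Keras layer. This is a minimal illustration in TensorFlow 2, assuming the mask tensor has the same shape as the feature tensor; `PartialConv2D` is a hypothetical name, and the repo's implementation may differ in details.

```python
import tensorflow as tf
from tensorflow.keras import layers

class PartialConv2D(layers.Layer):
    # x' = W^T(X ⊙ M) · sum(1)/sum(M) + b where sum(M) > 0, else 0,
    # with the mask update m' = 1 where sum(M) > 0, else 0.
    def __init__(self, filters, kernel_size, strides=1, **kwargs):
        super().__init__(**kwargs)
        self.filters = filters
        self.kernel_size = ((kernel_size, kernel_size)
                            if isinstance(kernel_size, int) else tuple(kernel_size))
        self.conv = layers.Conv2D(filters, kernel_size, strides=strides,
                                  padding="same", use_bias=False)
        # Fixed all-ones kernel: convolving the mask with it yields sum(M) per window.
        self.mask_conv = layers.Conv2D(filters, kernel_size, strides=strides,
                                       padding="same", use_bias=False,
                                       kernel_initializer="ones", trainable=False)

    def build(self, input_shape):
        x_shape, _ = input_shape
        # sum(1): number of elements in one sliding window (kh * kw * C_in).
        self.window_size = float(self.kernel_size[0] * self.kernel_size[1]
                                 * int(x_shape[-1]))
        self.bias = self.add_weight(name="bias", shape=(self.filters,),
                                    initializer="zeros")

    def call(self, inputs):
        x, mask = inputs                    # mask: 1 = valid pixel, 0 = hole
        raw = self.conv(x * mask)           # W^T (X ⊙ M)
        mask_sum = self.mask_conv(mask)     # sum(M) for every window
        valid = tf.cast(mask_sum > 0, x.dtype)
        ratio = self.window_size / tf.maximum(mask_sum, 1.0)
        out = (raw * ratio + self.bias) * valid
        return out, valid                   # valid is the updated mask m'
```

The scaling by sum(1)/sum(M) compensates for windows that see only a few valid pixels, and returning the updated mask lets stacked layers shrink the hole layer by layer.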

Visualizing Results :

  • Left to right: masked image, predicted image, original image
  • Training is time-consuming: on average, one epoch over the entire dataset takes about 3 hours.