Deep_Video_Inpainting

Official PyTorch implementation of "Deep Video Inpainting" (CVPR 2019)
Dahun Kim*, Sanghyun Woo*, Joon-Young Lee, and In So Kweon. (*: equal contribution)
[Paper] [Project page] [Video results]

If you are also interested in video caption removal, please check [Paper] [Project page]

Disclaimer

This code has been tested with Python 3.6 and PyTorch 0.4.0 (the compiled Resample2d and Correlation dependencies build against this PyTorch version).
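Before compiling the dependencies, it can help to confirm that the environment matches the tested setup. The snippet below is only a quick sanity check, not part of the repository; the version test simply mirrors the PyTorch 0.4.0 pin stated above.

import torch

# Quick environment check before compiling Resample2d / Correlation.
print('PyTorch version:', torch.__version__)
print('CUDA available :', torch.cuda.is_available())

if not torch.__version__.startswith('0.4'):
    print('Warning: this repo was tested with PyTorch 0.4.0; '
          'the compiled dependencies may not build on this version.')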

Testing

  1. Download the trained weight 'save_agg_rec_512.pth' and place it in "./results/vinet_agg_rec/" (a checkpoint sanity check is sketched after this list).
     Google drive: [weight-512x512] [weight-256x256]

  2. Compile the Resample2d and Correlation dependencies.

bash ./install.sh

  3. Run the demo (the results are saved in "./results/vinet_agg_rec/davis_512/").

python demo_vi.py

  4. Optional: run the video retargeting demo (Section 4.5 of the paper).

python demo_retarget.py
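If you want to verify that the weight file from step 1 is in place and loads correctly before running the demo, a minimal sketch follows. It only assumes the path and filename given above and that the file is a standard PyTorch checkpoint; the printed keys depend on how the checkpoint was saved.

import os
import torch

# Hypothetical sanity check: confirm the weight file from step 1 is where
# demo_vi.py expects it and that it deserializes as a PyTorch checkpoint.
ckpt_path = './results/vinet_agg_rec/save_agg_rec_512.pth'
assert os.path.isfile(ckpt_path), 'Checkpoint not found at ' + ckpt_path

# map_location='cpu' lets this run on machines without a GPU.
checkpoint = torch.load(ckpt_path, map_location='cpu')

# Checkpoints are typically either a raw state_dict or a dict wrapping one;
# printing the top-level keys shows which layout this file uses.
keys = list(checkpoint.keys()) if isinstance(checkpoint, dict) else []
print('Loaded checkpoint with %d top-level entries' % len(keys))
print('First few keys:', keys[:5])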

Citation

If you find this code useful in your research, please cite:

@inproceedings{kim2019deep,
  title={Deep Video Inpainting},
  author={Kim, Dahun and Woo, Sanghyun and Lee, Joon-Young and So Kweon, In},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={5792--5801},
  year={2019}
}