MCCNet


Arbitrary Video Style Transfer via Multi-Channel Correlation

Yingying Deng, Fan Tang, Weiming Dong, Haibin Huang, Chongyang Ma, Changsheng Xu

Results presentation

Visual comparisons of video style transfer results. The first row shows the stylized video frames. The second row shows heat maps that visualize the differences between two adjacent video frames.
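The heat maps in the second row can be reproduced with a simple per-pixel difference between adjacent stylized frames (larger values indicate more temporal flicker). A minimal sketch, assuming the frames are loaded as uint8 RGB arrays; the function name and the choice of averaging over channels are ours:

```python
import numpy as np

def frame_difference_heatmap(frame_a, frame_b):
    """Absolute per-pixel difference between two adjacent frames.

    frame_a, frame_b: HxWx3 uint8 arrays of consecutive stylized frames.
    Returns an HxW float map in [0, 1]; larger values mean more flicker.
    """
    a = frame_a.astype(np.float32) / 255.0
    b = frame_b.astype(np.float32) / 255.0
    # average the absolute difference over the RGB channels
    return np.abs(a - b).mean(axis=2)

# Identical frames give an all-zero heat map.
f0 = np.zeros((4, 4, 3), dtype=np.uint8)
print(frame_difference_heatmap(f0, f0).max())  # → 0.0
```

A colormap (e.g. matplotlib's `imshow` with `cmap="jet"`) can then be applied to this map for visualization.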

Framework

Overall structure of MCCNet.
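To give a rough feel for the multi-channel correlation idea, the sketch below re-weights content feature channels using a channel-by-channel correlation (Gram) matrix of the style features. This is not the authors' exact MCC module; the shapes, the Gram-based weighting, and the normalization are all our assumptions, kept only to illustrate that each output position depends pointwise on the content feature at the same position, which helps adjacent video frames stay coherent:

```python
import numpy as np

def multi_channel_correlation(content_feat, style_feat):
    """Hedged sketch of channel-wise correlation re-weighting.

    content_feat: (C, N) content features (C channels, N positions).
    style_feat:   (C, M) style features.
    Returns a (C, N) array of re-weighted content features.
    """
    s = style_feat - style_feat.mean(axis=1, keepdims=True)
    # channel-by-channel correlation (Gram) matrix of the style features
    gram = s @ s.T / s.shape[1]                        # (C, C)
    # normalize rows so each output channel is a bounded mix of channels
    weights = gram / (np.abs(gram).sum(axis=1, keepdims=True) + 1e-8)
    # every output position mixes channels of the SAME content position
    return weights @ content_feat                      # (C, N)
```

Because the style enters only through the fixed (C, C) weight matrix, two similar content positions (e.g. the same pixel in adjacent frames) map to similar outputs.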

Experiment

Requirements

  • python 3.6
  • pytorch 1.4.0
  • PIL, numpy, scipy
  • tqdm
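One way to set up this environment; the exact pip invocation is our assumption (the repository only lists the packages), and the torch build should be adjusted for your CPU/CUDA setup:

```shell
# Assumed pip invocation matching the requirements above;
# pick the torch 1.4.0 wheel appropriate for your machine.
pip install torch==1.4.0 Pillow numpy scipy tqdm
```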

Testing

Pretrained models: vgg-model, decoder, MCC_module (see above).
Please download them and put them into the folder ./experiments/.

python test_video.py --content_dir input/content/ --style_dir input/style/ --output out

Training

The training set is WikiArt, collected from WIKIART.

The testing set is COCO2014.

python train.py --style_dir ../../datasets/Images --content_dir ../../datasets/train2014 --save_dir models/ --batch_size 4

Reference

If you use our work in your research, please cite it using the following BibTeX entry. Thank you! Paper link: pdf (coming soon).

@inproceedings{deng:2020:arbitrary,
  title={Arbitrary Video Style Transfer via Multi-Channel Correlation},
  author={Deng, Yingying and Tang, Fan and Dong, Weiming and Huang, Haibin and Ma, Chongyang and Xu, Changsheng},
  booktitle={AAAI},
  year={2021}
}