
StyTr^2: Image Style Transfer with Transformers (CVPR 2022)

Authors: Yingying Deng, Fan Tang, Xingjia Pan, Weiming Dong, Chongyang Ma, Lei Wang, Changsheng Xu

This paper proposes a transformer-based model to achieve unbiased image style transfer, improving stylization quality over state-of-the-art methods. This repository is the official implementation of StyTr^2: Image Style Transfer with Transformers.

Results

Compared with state-of-the-art algorithms, our method is better able to avoid content leakage and offers stronger feature representations.

Framework

The overall pipeline of our StyTr^2 framework. We split the content and style images into patches and use a linear projection to obtain image sequences. The content sequences, augmented with the content-aware positional encoding (CAPE), are fed into the content transformer encoder, while the style sequences are fed into the style transformer encoder. Following the two transformer encoders, a multi-layer transformer decoder is adopted to stylize the content sequences according to the style sequences. Finally, we use a progressive upsampling decoder to obtain high-resolution stylized images.
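
Below is a minimal PyTorch sketch of this pipeline, for orientation only: the module names, the simplified CAPE stand-in, and all hyperparameters are assumptions, not the repository's actual definitions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StyTr2Sketch(nn.Module):
    # Illustrative skeleton of the pipeline; not the official modules.
    def __init__(self, dim=512, patch=8, heads=8, n_enc=3, n_dec=3):
        super().__init__()
        # Patch splitting + linear projection as one strided convolution.
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        # Simplified stand-in for CAPE: a positional encoding predicted
        # from average-pooled content features (the paper's CAPE is richer).
        self.cape = nn.Conv2d(dim, dim, kernel_size=1)
        self.content_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads), n_enc)
        self.style_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads), n_enc)
        # Multi-layer decoder: content tokens attend to style tokens.
        self.dec = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(dim, heads), n_dec)
        # Progressive upsampling decoder back to an RGB image.
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(dim, 256, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 3, 3, padding=1))

    def forward(self, content, style):
        c = self.embed(content)                       # B x dim x H/8 x W/8
        s = self.embed(style)
        b, d, h, w = c.shape
        # CAPE stand-in (assumes inputs of at least 32x32 pixels).
        pe = F.interpolate(self.cape(F.avg_pool2d(c, 4)), size=(h, w))
        c_seq = (c + pe).flatten(2).permute(2, 0, 1)  # (h*w) x B x dim
        s_seq = s.flatten(2).permute(2, 0, 1)
        out = self.dec(self.content_enc(c_seq), self.style_enc(s_seq))
        out = out.permute(1, 2, 0).reshape(b, d, h, w)
        return self.up(out)                           # B x 3 x H x W

The three Upsample + Conv stages undo the 8x spatial reduction of the patch embedding one factor of 2 at a time, which is the progressive upsampling idea described above.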

Experiment

Requirements

  • Python 3.6
  • PyTorch 1.4.0
  • Pillow (PIL), NumPy, SciPy
  • tqdm
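
The dependencies above can be installed with pip, for example (the torchvision pin is an assumption chosen to match PyTorch 1.4.0):

pip install torch==1.4.0 torchvision==0.5.0 Pillow numpy scipy tqdm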

Testing

Pretrained models: vgg-model, vit_embedding, decoder, Transformer_module
Please download them and put them into the folder ./experiments/

python test.py --content_dir input/content/ --style_dir input/style/ --output out
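
For a programmatic equivalent, one content/style pair could be stylized as below with the StyTr2Sketch model from the Framework section; the file names are placeholders and the 256x256 resize is an assumption, not what test.py actually does.

import torch
from PIL import Image
from torchvision import transforms

prep = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
# Placeholder file names; substitute your own content and style images.
content = prep(Image.open('input/content/example.jpg').convert('RGB')).unsqueeze(0)
style = prep(Image.open('input/style/example.jpg').convert('RGB')).unsqueeze(0)

model = StyTr2Sketch().eval()    # untrained sketch; real inference loads the pretrained weights
with torch.no_grad():
    out = model(content, style)  # 1 x 3 x 256 x 256
transforms.ToPILImage()(out.squeeze(0).clamp(0, 1)).save('out/stylized.jpg')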

Training

The style dataset is WikiArt, collected from WIKIART.

The content dataset is COCO2014.

python train.py --style_dir ../../datasets/Images/ --content_dir ../../datasets/train2014 --save_dir models/ --batch_size 8
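
For context, training a model like this combines VGG-based perceptual losses with identity losses. The sketch below follows that recipe; vgg_feats is an assumed feature-extractor callable, and the loss weights are assumptions rather than the paper's exact values.

import torch.nn.functional as F

def mean_std(feat, eps=1e-5):
    # Channel-wise mean and std over spatial positions.
    b, c = feat.shape[:2]
    flat = feat.reshape(b, c, -1)
    return flat.mean(dim=2), (flat.var(dim=2) + eps).sqrt()

def style_loss(f_out, f_style):
    # Match channel-wise feature statistics of output and style.
    m_o, s_o = mean_std(f_out)
    m_s, s_s = mean_std(f_style)
    return F.mse_loss(m_o, m_s) + F.mse_loss(s_o, s_s)

def total_loss(model, vgg_feats, Ic, Is, w_c=1.0, w_s=10.0, w_id1=70.0, w_id2=1.0):
    Ics = model(Ic, Is)
    # Content loss: VGG features of the output should match the content image.
    loss_c = F.mse_loss(vgg_feats(Ics), vgg_feats(Ic))
    # Style loss on VGG features of the output vs. the style image.
    loss_s = style_loss(vgg_feats(Ics), vgg_feats(Is))
    # Identity losses: stylizing an image with itself should reproduce it.
    Icc, Iss = model(Ic, Ic), model(Is, Is)
    loss_id1 = F.mse_loss(Icc, Ic) + F.mse_loss(Iss, Is)
    loss_id2 = (F.mse_loss(vgg_feats(Icc), vgg_feats(Ic)) +
                F.mse_loss(vgg_feats(Iss), vgg_feats(Is)))
    return w_c * loss_c + w_s * loss_s + w_id1 * loss_id1 + w_id2 * loss_id2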

Reference

If you find our work useful in your research, please cite our paper with the following BibTeX entry. Thank you ^ . ^ Paper link: pdf

@inproceedings{deng2021stytr2,
  title={StyTr^2: Image Style Transfer with Transformers},
  author={Yingying Deng and Fan Tang and Weiming Dong and Chongyang Ma and Xingjia Pan and Lei Wang and Changsheng Xu},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}