This is the official TensorFlow implementation of our paper:
Attention-aware Multi-stroke Style Transfer, CVPR 2019. [Project] [arXiv]
Yuan Yao, Jianqiang Ren, Xuansong Xie, Weidong Liu, Yong-Jin Liu, Jun Wang
This project provides an arbitrary style transfer method that achieves both faithful style transfer and visual consistency between the content and stylized images. The key idea of the proposed method is to employ a self-attention mechanism, multi-scale style swap, and a flexible stroke pattern fusion strategy to smoothly and adaptively apply suitable stroke patterns to different regions. In this manner, the synthesized images are more visually pleasing and are generated in a single feed-forward pass.
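To give a concrete feel for the style-swap component at a single scale, below is a minimal NumPy sketch of patch-based style swap: each content feature patch is replaced by the style feature patch with the highest normalized cross-correlation. This is an illustrative sketch only (the function name, patch size, and dense Python loops are ours for clarity), not the repository's optimized TensorFlow implementation.

```python
import numpy as np

def style_swap_sketch(content_feat, style_feat, patch_size=3):
    """Illustrative single-scale style swap on (H, W, C) feature maps.

    Each content patch is replaced by the style patch with the highest
    normalized cross-correlation. Conceptual sketch only; the repository
    implements this with convolutions inside the TensorFlow graph.
    """
    H, W, C = content_feat.shape
    sH, sW, _ = style_feat.shape
    ps = patch_size

    # Collect style patches and their L2-normalized copies.
    patches, normed = [], []
    for i in range(sH - ps + 1):
        for j in range(sW - ps + 1):
            p = style_feat[i:i + ps, j:j + ps, :].astype(np.float64)
            patches.append(p)
            normed.append(p / (np.linalg.norm(p) + 1e-8))

    out = np.zeros((H, W, C), dtype=np.float64)
    count = np.zeros((H, W, 1), dtype=np.float64)
    for i in range(H - ps + 1):
        for j in range(W - ps + 1):
            c = content_feat[i:i + ps, j:j + ps, :]
            # Nearest style patch by normalized cross-correlation.
            best = patches[int(np.argmax([np.sum(c * n) for n in normed]))]
            # Overlap-average the swapped patches back into place.
            out[i:i + ps, j:j + ps, :] += best
            count[i:i + ps, j:j + ps, :] += 1.0
    return out / np.maximum(count, 1.0)
```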
Our model is now available on the ModelScope platform, where it can be tried out easily.
- Python (version 2.7)
- TensorFlow (>=1.4)
- Numpy
- Matplotlib
- MSCOCO dataset, used for training the proposed self-attention autoencoder.
- Pre-trained VGG-19 model.
Make sure a sub-folder named test_result exists under the images folder, then run
$ python test.py --model tf_model/aams.pb \
--content images/content/lenna_cropped.jpg \
--style images/style/candy.jpg \
--inter_weight 1.0
Both the stylized image and the attention map will be generated in test_result.
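test.py handles graph loading internally; for reference, a frozen graph such as tf_model/aams.pb can be executed roughly as follows with the TF 1.x API. The tensor names below ("content:0", "style:0", "inter_weight:0", "stylized:0") are placeholders invented for illustration; the actual names must be read from the graph definition (or simply use test.py).

```python
import numpy as np
import tensorflow as tf  # sketch written against the TF 1.x API

def run_frozen_graph(pb_path, content_img, style_img, inter_weight=1.0):
    """Run a frozen .pb graph on one content/style pair (conceptual sketch).

    content_img and style_img are float32 arrays of shape (H, W, 3).
    All tensor names below are hypothetical placeholders.
    """
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(pb_path, 'rb') as f:
        graph_def.ParseFromString(f.read())

    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
        with tf.Session(graph=graph) as sess:
            feed = {
                'content:0': content_img[np.newaxis],  # hypothetical name
                'style:0': style_img[np.newaxis],      # hypothetical name
                'inter_weight:0': inter_weight,        # hypothetical name
            }
            return sess.run('stylized:0', feed_dict=feed)  # hypothetical name
```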
Our model is trained with TensorFlow 1.4.
Download the MSCOCO dataset and filter out images with unsuitable formats (grayscale, etc.) by running the command below; a conceptual sketch of this filtering step is shown after the training command.
$ python filter_training_images.py --dataset datasets/COCO_Datasets/val2014
then run
$ python train.py --dataset datasets/COCO_Datasets/val2014
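For reference, the filtering step essentially discards images that are not plain 3-channel RGB (grayscale, CMYK, etc.). The sketch below only illustrates that idea; it is not the repository's filter_training_images.py, and the function name and JPEG-only glob pattern are our simplifications.

```python
import glob
import os

import matplotlib.image as mpimg

def find_unsuitable_images(dataset_dir):
    """List images that are not 3-channel RGB (conceptual sketch only)."""
    bad = []
    for path in glob.glob(os.path.join(dataset_dir, '*.jpg')):
        img = mpimg.imread(path)
        # Grayscale images load as 2-D arrays; RGBA/CMYK have != 3 channels.
        if img.ndim != 3 or img.shape[-1] != 3:
            bad.append(path)
    return bad

# Example: find_unsuitable_images('datasets/COCO_Datasets/val2014')
```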
If our work is useful for your research, please consider citing:
@inproceedings{yao2019attention,
  title={Attention-aware Multi-stroke Style Transfer},
  author={Yao, Yuan and Ren, Jianqiang and Xie, Xuansong and Liu, Weidong and Liu, Yong-Jin and Wang, Jun},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019}
}
© Alibaba, 2019. For academic and non-commercial use only.
We express our gratitude to the style-agnostic style transfer works Style-swap, WCT and Avatar-Net, as we benefited a lot from both their papers and code.
If you have any questions or suggestions about this paper, feel free to contact Yuan Yao or Jianqiang Ren.