StyleFormer

Official PyTorch implementation for the paper:

StyleFormer: Real-Time Arbitrary Style Transfer via Parametric Style Composition

Overview

This is our overall framework:

[Figure: overall framework of StyleFormer]

Examples

[Figure: example stylization results produced by StyleFormer]

Introduction

This is a release of the code for our ICCV 2021 paper StyleFormer: Real-Time Arbitrary Style Transfer via Parametric Style Composition.

Authors: Xiaolei Wu, Zhihao Hu, Lu Sheng*, Dong Xu (*corresponding author)

Update

  • 2021.12.17: Upload PyTorch implementation of StyleFormer.

Dependencies

  • CUDA 10.1
  • Python 3.7.7
  • PyTorch 1.3.1
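
One way to set up a matching environment is with conda. This is only a sketch: the environment name is arbitrary, and these package builds are old, so availability on the PyTorch channel may vary.

conda create -n styleformer python=3.7.7
conda activate styleformer
conda install pytorch==1.3.1 torchvision==0.4.2 cudatoolkit=10.1 -c pytorch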

Datasets

MS-COCO

Please download the MS-COCO dataset (https://cocodataset.org), which serves as the content image set for training.

WikiArt

Please download the WikiArt dataset from Kaggle, which serves as the style image set for training.

Download Trained Models

We provide the trained StyleFormer model and the pre-trained VGG network weights.

Training

cd ./scripts
sh train.sh {GPU_ID}
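
For example, to train on GPU 0 (an illustrative invocation; {GPU_ID} above is a placeholder for the GPU index):

cd ./scripts
sh train.sh 0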

Test

git clone https://github.com/Wxl-stars/PytorchStyleFormer.git
cd PytorchStyleFormer

CUDA_VISIBLE_DEVICES={GPU_ID} python test.py \
     --trained_network={PRE-TRAINED_STYLEFORMER_MODEL} \
     --path={VGG_PATH} \
     --input_path={CONTENT_PATH} \
     --style_path={STYLE_PATH} \
     --results_path={RESULTS_PATH}

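A concrete test invocation might look like the following. The checkpoint names and directory paths here are illustrative placeholders, not files shipped with the repository; substitute the paths where you saved the downloaded models and your images.

CUDA_VISIBLE_DEVICES=0 python test.py \
     --trained_network=./models/styleformer.pth \
     --path=./models/vgg.pth \
     --input_path=./inputs/content/ \
     --style_path=./inputs/style/ \
     --results_path=./results/
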
Citation

If you find our work useful in your research, please consider citing:

@inproceedings{wu2021styleformer,
  title={StyleFormer: Real-Time Arbitrary Style Transfer via Parametric Style Composition},
  author={Wu, Xiaolei and Hu, Zhihao and Sheng, Lu and Xu, Dong},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={14618--14627},
  year={2021}
}

Contact

If you have any questions or suggestions about this paper, feel free to contact us:

Xiaolei Wu: wuxiaolei@buaa.edu.cn
Zhihao Hu: huzhihao@buaa.edu.cn