Official PyTorch implementation of our ICCV 2021 paper:

StyleFormer: Real-Time Arbitrary Style Transfer via Parametric Style Composition

Overview of our framework.
Authors: Xiaolei Wu, Zhihao Hu, Lu Sheng*, Dong Xu (*corresponding author)
- 2021.12.17: Uploaded the PyTorch implementation of StyleFormer.
Requirements:
- CUDA 10.1
- Python 3.7.7
- PyTorch 1.3.1
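A minimal environment setup sketch, assuming conda (the environment name is illustrative; the install command follows the official PyTorch previous-versions instructions for 1.3.1 with CUDA 10.1):

```bash
# Create and activate a Python 3.7.7 environment (name is illustrative).
conda create -n styleformer python=3.7.7
conda activate styleformer
# PyTorch 1.3.1 built against CUDA 10.1, with the matching torchvision.
conda install pytorch==1.3.1 torchvision==0.4.2 cudatoolkit=10.1 -c pytorch
```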
Please download the MS-COCO dataset (used as the content images) and the WikiArt dataset from Kaggle (used as the style images).
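One possible way to fetch the datasets from the command line; the MS-COCO URL is the official train2014 archive, while the Kaggle dataset slug is left as a placeholder (use whichever WikiArt upload you prefer, or download it manually from Kaggle):

```bash
# MS-COCO train2014 (content images, ~13 GB).
wget http://images.cocodataset.org/zips/train2014.zip
unzip -q train2014.zip -d ./datasets/coco
# WikiArt (style images) via the Kaggle CLI; substitute the dataset slug you use.
kaggle datasets download -d {WIKIART_DATASET_SLUG} -p ./datasets/wikiart --unzip
```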
We provide the trained StyleFormer model and the VGG network weights:

- StyleFormer
  - Google Drive
  - BaiduNetdisk (Extraction Code: kc44)
- VGG
  - Google Drive
  - BaiduNetdisk (Extraction Code: n47y)
To get started, clone this repository:

```bash
git clone https://github.com/Wxl-stars/PytorchStyleFormer.git
cd PytorchStyleFormer
```

To train StyleFormer, run the training script with the ID of the GPU to use:

```bash
cd ./scripts
sh train.sh {GPU_ID}
```
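For example, to launch training on GPU 0 (assuming `train.sh` takes the GPU index as its only argument, as the usage above suggests):

```bash
sh train.sh 0
```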
To test with the pre-trained models:

```bash
CUDA_VISIBLE_DEVICES={GPU_ID} python test.py \
    --trained_network={PRE-TRAINED_STYLEFORMER_MODEL} \
    --path={VGG_PATH} \
    --input_path={CONTENT_PATH} \
    --style_path={STYLE_PATH} \
    --results_path={RESULTS_PATH}
```
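For example, with the downloaded weights and test images placed under hypothetical local paths (every path below is a placeholder; substitute your own):

```bash
# All file and directory names here are illustrative, not shipped with the repo.
CUDA_VISIBLE_DEVICES=0 python test.py \
    --trained_network=./checkpoints/styleformer.pth \
    --path=./checkpoints/vgg.pth \
    --input_path=./inputs/content \
    --style_path=./inputs/style \
    --results_path=./results
```

Stylized outputs are written to the directory given by `--results_path`.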
If you find our work useful in your research, please consider citing:
```bibtex
@inproceedings{wu2021styleformer,
  title={StyleFormer: Real-Time Arbitrary Style Transfer via Parametric Style Composition},
  author={Wu, Xiaolei and Hu, Zhihao and Sheng, Lu and Xu, Dong},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={14618--14627},
  year={2021}
}
```
If you have any questions or suggestions about this paper, feel free to contact:
- Xiaolei Wu: wuxiaolei@buaa.edu.cn
- Zhihao Hu: huzhihao@buaa.edu.cn