
WCT-based-style-transfer

This is a TensorFlow implementation of WCT-based style transfer.

WCT-based style transfer was proposed in the paper Universal Style Transfer via Feature Transforms. Its core idea is to train a decoder to reconstruct a general image from features produced by a pretrained VGG19. At test time, a WCT (whitening and coloring transform) layer blends the features of the content and style images, and the trained decoder then decodes the blended features into the stylized image. In the original paper, the authors trained five decoders, one for each layer reluX (X = 1, 2, 3, 4, 5), and used all of them to carry out multi-level stylization.


In this repository, we only trained four decoders, for layers relu1 to relu4.
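
The WCT layer itself is plain linear algebra on the flattened VGG features: whiten the content features, then color them with the style statistics, and finally blend with the original content features using alpha. Below is a minimal NumPy sketch of that transform; the function name, the eps clipping, and the (C, H*W) feature layout are illustrative assumptions rather than code from this repo.

import numpy as np

def wct(content_feat, style_feat, alpha=1.0, eps=1e-8):
    # content_feat, style_feat: arrays of shape (C, H*W).
    # Whitening: remove the content feature correlations.
    mc = content_feat.mean(axis=1, keepdims=True)
    fc = content_feat - mc
    cov_c = fc @ fc.T / (fc.shape[1] - 1)
    wc, Ec = np.linalg.eigh(cov_c)                 # eigenvalues in ascending order
    wc = np.clip(wc, eps, None)                    # guard against tiny eigenvalues
    fc_hat = Ec @ np.diag(wc ** -0.5) @ Ec.T @ fc  # whitened content features

    # Coloring: re-apply the style feature correlations and mean.
    ms = style_feat.mean(axis=1, keepdims=True)
    fs = style_feat - ms
    cov_s = fs @ fs.T / (fs.shape[1] - 1)
    ws, Es = np.linalg.eigh(cov_s)
    ws = np.clip(ws, eps, None)
    fcs = Es @ np.diag(ws ** 0.5) @ Es.T @ fc_hat + ms

    # Blend the stylized features with the original content features.
    return alpha * fcs + (1.0 - alpha) * content_feat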

Samples





Test

1. Download the pre-trained VGG19 weights (VGG19).
2. Download the weights of the four trained decoders, then

mkdir models

and put the trained decoder weights into the models folder.

Then run:

python test_all_layer.py --content_path /path/to/content_img --style_path /path/to/style_img  --output_path /path/to/result_img --pretrained_vgg path/to/vgg19 --alpha 1
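
Here --alpha controls the blend between the stylized and the original content features (1 gives full stylization). Conceptually, test_all_layer.py runs the four decoders coarse-to-fine, feeding each decoded result into the next level. The sketch below illustrates that loop; encode, decode and wct_fn are hypothetical stand-ins for the pretrained VGG19 encoder, the trained decoders and the WCT layer (the actual names in this repo may differ).

def multi_level_stylize(content_img, style_img, encode, decode, wct_fn, alpha=1.0):
    # Start at the deepest layer (relu4) and work down to relu1, so coarse
    # style structure is transferred first and finer textures afterwards.
    stylized = content_img
    for layer in ("relu4", "relu3", "relu2", "relu1"):
        content_feat = encode(stylized, layer)             # features of the current result
        style_feat = encode(style_img, layer)              # style features at the same layer
        blended = wct_fn(content_feat, style_feat, alpha)  # WCT layer (see sketch above)
        stylized = decode(blended, layer)                  # decoder trained for this layer
    return stylized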

Train

1. Download the content images from COCO.
2. Make the TFRecord file:

mkdir tfrecords
python gen_tfrecords.py
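
gen_tfrecords.py packs the downloaded COCO images into the TFRecord file used for training. A rough sketch of what such a writer looks like is below; the directory, output path and the image_raw feature key are assumptions, not the exact names used by gen_tfrecords.py.

import glob
import tensorflow as tf

def write_tfrecord(image_dir="coco/train2014", out_path="tfrecords/train.tfrecords"):
    # Store each JPEG as a single bytes feature, one tf.train.Example per image.
    with tf.io.TFRecordWriter(out_path) as writer:
        for img_path in glob.glob(image_dir + "/*.jpg"):
            with open(img_path, "rb") as f:
                img_bytes = f.read()
            example = tf.train.Example(features=tf.train.Features(feature={
                "image_raw": tf.train.Feature(
                    bytes_list=tf.train.BytesList(value=[img_bytes]))}))
            writer.write(example.SerializeToString())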

3. Train the decoders for layers relu1 to relu4 separately. For example, to train the decoder for relu4, run:
python train.py --target_layer relu4 --pretrained_path path/to/vgg19 --max_iterator 20000 --checkpoint_path path/to/save_checkpoint --tfrecord_path path/to/tfrecord  --batch_size 8
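
Each decoder is trained to invert its VGG19 layer. In the WCT paper this uses a pixel reconstruction loss plus a feature reconstruction loss; the sketch below shows that objective, with encode and decode as hypothetical stand-ins and the loss weight as an assumption rather than the value used in train.py.

import tensorflow as tf

def reconstruction_loss(images, encode, decode, target_layer="relu4", feature_weight=1.0):
    # Encode the input, decode it back, and penalize both the pixel error
    # and the mismatch between the features of the input and its reconstruction.
    feats = encode(images, target_layer)
    recon = decode(feats, target_layer)
    pixel_loss = tf.reduce_mean(tf.square(recon - images))
    feat_loss = tf.reduce_mean(tf.square(encode(recon, target_layer) - feats))
    return pixel_loss + feature_weight * feat_loss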

Reference

1. https://github.com/eridgd/WCT-TF

2. https://github.com/machrisaa/tensorflow-vgg