A TensorFlow implementation of "Perceptual Losses for Real-Time Style Transfer and Super-Resolution".
This code is based on Tensorflow-Slim and OlavHN/fast-neural-style.
configuration | style | sample
---|---|---
wave.yml | |
cubist.yml | |
denoised_starry.yml | |
mosaic.yml | |
scream.yml | |
feathers.yml | |
udnie.yml | |
- Python 2.7.x
- TensorFlow (>= 0.11)

Also make sure pyyaml is installed:
pip install pyyaml
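The training configurations are YAML files, so pyyaml can load them directly. A minimal sketch of parsing a config; the field names below are hypothetical stand-ins, the real ones live in conf/wave.yml:

```python
import yaml  # provided by the pyyaml package

# A hypothetical fragment written in the style of the conf/*.yml files;
# consult conf/wave.yml for the actual field names and values.
config_text = """
naming: wave
style_weight: 220.0
content_weight: 1.0
"""

# safe_load parses the YAML document into a plain Python dict.
config = yaml.safe_load(config_text)
print(config["naming"])
print(config["style_weight"])
```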
You can download all 7 trained models from Baidu Drive.
To generate a sample from the model "wave.ckpt-done", run:
python eval.py --model_file <your path to wave.ckpt-done> --image_file img/test.jpg
Then check out generated/res.jpg.
To train a model from scratch, first download the VGG16 model from Tensorflow Slim and extract the file vgg_16.ckpt. Then copy it to the folder pretrained/ :
cd <this repo>
mkdir pretrained
cp <your path to vgg_16.ckpt> pretrained/
Then download the COCO dataset and unzip it; you will get a folder named "train2014" containing many raw images. Create a symbolic link to it:
cd <this repo>
ln -s <your path to the folder "train2014"> train2014
Train the "wave" model:
python train.py -c conf/wave.yml
(Optional) Use tensorboard:
tensorboard --logdir models/wave/
Checkpoints will be written to "models/wave/".
See the configuration file (e.g. conf/wave.yml) for details.