The code in this repository has been merged into AdaIN-style. This repository will contain only experiments and minor improvements; all major updates will be merged into AdaIN-style.
At present, style transfer is performed on a frame-by-frame basis. The style features are cached so that the style image is encoded only once per video, which gives a speedup of about 1.2-1.4x.
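A minimal sketch of the caching idea (not the repo's verbatim code; it assumes `vgg`, `adain`, and `decoder` are the networks loaded as in AdaIN-style, and that `styleImg`, `numFrames`, `loadFrame`, and `saveFrame` are hypothetical stand-ins):

```lua
-- Encode the style image exactly once, then reuse its features for every frame.
local styleFeat = vgg:forward(styleImg):clone()  -- clone: vgg reuses its output buffer

for i = 1, numFrames do
  local contentFeat = vgg:forward(loadFrame(i))           -- encode the current frame
  local target = adain:forward({contentFeat, styleFeat})  -- match content stats to style stats
  saveFrame(i, decoder:forward(target))                   -- decode back to image space
end
```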
I have yet to incorporate optical flow (as in the artistic-videos repository); supporting temporal consistency will require retraining. Even so, this approach is much faster than artistic-videos, since it performs style transfer using Adaptive Instance Normalization.
The dependencies are the same as those of AdaIN-style.
Optionally:
- CUDA and cuDNN
- cunn
- cudnn.torch (Torch bindings for cuDNN)
For a 10-second video at 480p resolution, processing takes about 2 minutes on a Titan X Maxwell GPU (12 GB).
Download the pretrained models with:

bash models/download_models.sh
TODO:
- Add audio support
- Retrain to incorporate motion information
To stylize a video, run:

bash styVid.sh input.mp4 style-dir-path
This generates one mp4 for each style image present in `style-dir-path`. Next, follow the instructions given by the prompt. To change other parameters, such as `alpha`, edit `test.lua` (see the sketch below).
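For reference, `alpha` controls the content-style trade-off in AdaIN: the decoder input interpolates between the content features and the AdaIN output. A rough sketch of that step, reusing the hypothetical feature tensors from the sketch above:

```lua
-- alpha = 1 gives full stylization, alpha = 0 reconstructs the content frame.
local alpha = 0.75
local target = adain:forward({contentFeat, styleFeat}):clone()
target:mul(alpha):add(1 - alpha, contentFeat)  -- alpha*t + (1 - alpha)*c
local stylized = decoder:forward(target)
```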
For example:

bash styVid.sh input/videos/cutBunny.mp4 input/styleexample
This will first create two folders, `videos` and `videoprocessing`. It will then generate three mp4 files, `cutBunny-stylized-mondrian.mp4`, `cutBunny-stylized-woman_with_hat_matisse.mp4`, and `cutBunny-fix.mp4`, in the `videos` folder. I have included these files in the `examples/Result` folder for reference. The individual frames and intermediate output are kept in the `videoprocessing` folder.
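Conceptually, each run splits the video into frames with ffmpeg, stylizes every frame with `test.lua`, and reassembles the frames into an mp4. A rough sketch of those stages, driven from Lua (the directory layout, `test.lua` flags, and frame rate here are illustrative assumptions, not the script's actual internals):

```lua
-- Illustrative pipeline stages; the real script is bash and may differ.
os.execute('ffmpeg -i input.mp4 videoprocessing/frame-%05d.png')    -- split into frames
os.execute('th test.lua -contentDir videoprocessing ' ..
           '-style style.jpg -outputDir videoprocessing/stylized')  -- stylize each frame
os.execute('ffmpeg -framerate 30 -i videoprocessing/stylized/frame-%05d.png ' ..
           'videos/output-stylized.mp4')                            -- reassemble
```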
An example video with some results can be seen on YouTube.
If you find this code useful for your research, please cite the paper:
@article{huang2017adain,
  title={Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization},
  author={Huang, Xun and Belongie, Serge},
  journal={arXiv preprint arXiv:1703.06868},
  year={2017}
}