A deep learning method for creating video sequences
This repository includes a method for generating what I like to call Neural Dream Videos: new videos that share the temporal and spatial qualities of a source video. It uses a variational autoencoder (VAE) and a recurrent neural network (RNN) in conjunction to produce the videos. To see example videos and learn more, see my Medium post about it.
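To illustrate how the two networks fit together, here is a conceptual sketch (not the repository's code) of the generation loop: the VAE encodes frames into a latent space, the RNN rolls the latent code forward in time, and the VAE decoder turns each predicted code back into a frame. The linear "networks" and all names here are stand-ins for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
frame_dim, latent_dim = 64, 8

# Stand-ins for a trained VAE encoder/decoder (plain linear maps here).
W_enc = rng.normal(size=(frame_dim, latent_dim)) / np.sqrt(frame_dim)
W_dec = rng.normal(size=(latent_dim, frame_dim)) / np.sqrt(latent_dim)

def encode(frame):
    return frame @ W_enc          # frame -> latent code

def decode(z):
    return z @ W_dec              # latent code -> reconstructed frame

# Stand-in for a trained RNN transition that predicts the next latent code.
W_h = rng.normal(size=(latent_dim, latent_dim)) / np.sqrt(latent_dim)

def rnn_step(z):
    return np.tanh(z @ W_h)      # next latent from current latent

# Generation: seed with one real frame, then roll forward entirely in
# latent space, decoding each predicted code into a new video frame.
seed_frame = rng.normal(size=frame_dim)
z = encode(seed_frame)
video = []
for _ in range(16):
    z = rnn_step(z)
    video.append(decode(z))

video = np.stack(video)
print(video.shape)  # (16, 64): 16 generated frames of dimension 64
```

The key design point is that the RNN never sees raw pixels; it only has to model dynamics in the much smaller latent space, which is what makes the combination tractable.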
Included in the repository is an IPython notebook containing most of the information needed to make your own videos. The neural network architectures are written in TensorFlow; you will likely need at least version 0.8.
The VAE is based on the model by Jan Hendrik Metzen.
The RNN is heavily modified from the model by sherjilozair.