# VideoGAN

Implement a video generative model.

- Original paper: https://arxiv.org/pdf/1511.05440.pdf
- Ms. Pacman dataset: https://drive.google.com/open?id=0Byf787GZQ7KvV25xMWpWbV9LdUU
- Adversarial Video Generation (reference implementation): https://github.com/dyelax/Adversarial_Video_Generation

## Generate VideoGAN Data

The VideoGAN training data requires preprocessing. To generate it:

1. Download the Ms_Pacman dataset from https://drive.google.com/open?id=0Byf787GZQ7KvV25xMWpWbV9LdUU
2. Run the following commands in the VideoGAN directory:

```shell
unzip Ms_Pacman.zip
mkdir train
python process_data.py Ms_Pacman/Train train --num_clips=5000
```
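The preprocessing step crops small training clips out of the raw game frames. A minimal numpy sketch of the kind of random spatio-temporal cropping `process_data.py` performs (the `extract_clips` name, 32×32 crop size, and 5-frame clip length are illustrative assumptions, not the script's actual interface):

```python
import numpy as np

def extract_clips(frames, num_clips, crop_size=32, clip_len=5, seed=0):
    """Sample random spatio-temporal crops from a video.

    frames: array of shape (T, H, W, 3).
    Returns: array of shape (num_clips, clip_len, crop_size, crop_size, 3).
    Hypothetical re-implementation of the cropping step; all parameters
    are assumptions about what process_data.py does.
    """
    rng = np.random.default_rng(seed)
    T, H, W, _ = frames.shape
    clips = np.empty((num_clips, clip_len, crop_size, crop_size, 3),
                     dtype=frames.dtype)
    for i in range(num_clips):
        # Pick a random start time and top-left corner for this clip.
        t = rng.integers(0, T - clip_len + 1)
        y = rng.integers(0, H - crop_size + 1)
        x = rng.integers(0, W - crop_size + 1)
        clips[i] = frames[t:t + clip_len, y:y + crop_size, x:x + crop_size]
    return clips
```

Cropping small patches rather than training on full frames keeps the networks small and gives many training examples per recorded game.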

## Run the Vanilla GAN

```shell
bash vanilla_gan/download_dataset.sh
cp -r a4-code-v2-updated/emojis emojis
python3 process.py
```

## Generate the Entire Pacman Board

```shell
cp generator_net.pth.tmp generator_net.pth
python inference.py output_example_video.mp4
```
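Generating a video means rolling the generator forward one frame at a time: predict the next frame from the recent history, append it, and repeat. A numpy sketch of that loop (the `rollout` function, the history-length convention, and the dummy generator are assumptions for illustration; the real `inference.py` loads `generator_net.pth` and writes the output video):

```python
import numpy as np

def rollout(generator, seed_frames, num_steps):
    """Autoregressive video generation.

    `generator` is assumed to map a (hist_len, H, W, 3) frame history
    to a single (H, W, 3) next frame. Each predicted frame is appended
    to the history and fed back in for the next step.
    """
    frames = list(seed_frames)
    hist_len = len(seed_frames)
    for _ in range(num_steps):
        next_frame = generator(np.stack(frames[-hist_len:]))
        frames.append(next_frame)
    # (hist_len + num_steps, H, W, 3): seed frames plus generated ones
    return np.stack(frames)
```

Because each step conditions on generated frames, small prediction errors compound over long rollouts, which is why the adversarial loss (rather than plain MSE) matters for keeping frames sharp.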

## Test a Saved Model

The test script loads generator_net.pth.tmp:

```shell
python test_model.py
```

## Tips and Tricks to Train GANs

https://github.com/soumith/ganhacks
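One trick from that list, one-sided label smoothing, is easy to drop into a discriminator loss: real examples get a target of 0.9 instead of 1.0, which keeps the discriminator from becoming overconfident. A numpy sketch (the function shape and the 0.9 value are illustrative assumptions, not this repo's training code):

```python
import numpy as np

def d_loss(real_logits, fake_logits, smooth=0.9):
    """Discriminator BCE loss with one-sided label smoothing.

    Real samples are pushed toward `smooth` (0.9) instead of 1.0;
    fake samples keep the hard target 0.0. A hypothetical sketch of
    the trick, not the repo's actual loss function.
    """
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    eps = 1e-12  # avoid log(0)
    real_p = sigmoid(real_logits)
    fake_p = sigmoid(fake_logits)
    # Cross-entropy against the smoothed target for real samples.
    real_term = -np.mean(smooth * np.log(real_p + eps)
                         + (1.0 - smooth) * np.log(1.0 - real_p + eps))
    # Cross-entropy against target 0 for fake samples.
    fake_term = -np.mean(np.log(1.0 - fake_p + eps))
    return real_term + fake_term
```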

## TODO

1. Generate the entire Pacman board from the generative network
2. Evaluate predicted frames using Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Inception distance
3. Change the loss function to use the log() function
4. Write blog post
5. CVPR notes
6. Generate graphs for blog post (Mike)
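For the evaluation item above, PSNR falls out directly from the pixel-wise MSE between a predicted frame and the ground truth; a minimal numpy version (SSIM and Inception distance need more machinery, e.g. scikit-image's `structural_similarity`):

```python
import numpy as np

def psnr(pred, target, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two images or frame stacks.

    PSNR = 10 * log10(max_val^2 / MSE). Higher is better; identical
    inputs give infinity.
    """
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

PSNR rewards low pixel error, so a blurry MSE-trained generator can score well on it even when frames look bad; SSIM and Inception-based metrics are the usual complements.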