Implement video generative model
Original Paper: https://arxiv.org/pdf/1511.05440.pdf
Pacman dataset: https://drive.google.com/open?id=0Byf787GZQ7KvV25xMWpWbV9LdUU
Adversarial Video Generation: https://github.com/dyelax/Adversarial_Video_Generation
The VideoGAN training data requires preprocessing. To generate the VideoGAN data:
- Download the Ms_Pacman dataset from https://drive.google.com/open?id=0Byf787GZQ7KvV25xMWpWbV9LdUU
- Run the following commands in the `VideoGAN` directory:
```shell
unzip Ms_Pacman.zip
mkdir train
python process_data.py Ms_Pacman/Train train --num_clips=5000
bash vanilla_gan/download_dataset.sh
cp -r a4-code-v2-updated/emojis emojis
python3 process.py
cp generator_net.pth.tmp generator_net.pth
python inference.py output_example_video.mp4
python test_model.py
```
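The inference step above loads a saved generator checkpoint (`generator_net.pth`) and writes a predicted video. A minimal sketch of that step, assuming a PyTorch state-dict checkpoint and a small frame-prediction network; the `TinyGenerator` class, its layer sizes, and the 4-frame history input are illustrative assumptions, not the project's actual architecture:

```python
# Sketch only: TinyGenerator and its shapes are hypothetical stand-ins
# for whatever architecture generator_net.pth actually holds.
import torch
import torch.nn as nn


class TinyGenerator(nn.Module):
    """Toy next-frame predictor: 4 stacked RGB history frames -> 1 frame."""

    def __init__(self, history=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * history, 32, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
            nn.Tanh(),  # frames assumed scaled to [-1, 1]
        )

    def forward(self, x):
        return self.net(x)


def predict_next_frame(checkpoint_path, history_frames):
    """Load generator weights and predict the next frame.

    history_frames: tensor of shape (batch, 3 * history, H, W).
    """
    gen = TinyGenerator()
    gen.load_state_dict(torch.load(checkpoint_path))
    gen.eval()
    with torch.no_grad():
        return gen(history_frames)
```

Writing the predicted frames out as `output_example_video.mp4` would then be a separate encoding step (e.g. via ffmpeg), which `inference.py` presumably handles.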
GAN training tips ("ganhacks"): https://github.com/soumith/ganhacks
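Two of the tips from the ganhacks list above can be sketched as loss functions: one-sided label smoothing for the discriminator (real targets at 0.9 instead of 1.0) and the non-saturating generator loss -log(D(G(z))). This is a generic PyTorch sketch, not the project's actual loss code; the function names and the 0.9 smoothing value are assumptions:

```python
# Sketch of two ganhacks tips as standalone loss functions.
import torch
import torch.nn.functional as F


def d_loss(d_real_logits, d_fake_logits, smooth=0.9):
    """Discriminator BCE loss with one-sided label smoothing (real = 0.9)."""
    real_targets = torch.full_like(d_real_logits, smooth)
    fake_targets = torch.zeros_like(d_fake_logits)
    loss_real = F.binary_cross_entropy_with_logits(d_real_logits, real_targets)
    loss_fake = F.binary_cross_entropy_with_logits(d_fake_logits, fake_targets)
    return loss_real + loss_fake


def g_loss(d_fake_logits):
    """Non-saturating generator loss: -log(D(G(z))) instead of log(1 - D(G(z)))."""
    targets = torch.ones_like(d_fake_logits)
    return F.binary_cross_entropy_with_logits(d_fake_logits, targets)
```

The non-saturating form gives the generator stronger gradients early in training, when the discriminator easily rejects its samples.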
- Generate the entire Pacman board from the generative network
- Evaluate predicted frames using Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Inception Distance
- Change the loss function to use log()
- Write blog post
- CVPR Notes
- Generate graphs for blog post (Mike)