Refer to wayaai/SimGAN (Keras & TensorFlow)
Implementation of Apple's Learning from Simulated and Unsupervised Images through Adversarial Training
I have used TensorFlow and Keras before, and recently I have been learning PyTorch, so I ported the code of https://github.com/wayaai/SimGAN (TensorFlow + Keras) to PyTorch.
The code of https://github.com/wayaai/SimGAN helped me a lot, thanks! Some code in my repository still comes from it, namely:
- mpii_gaze_dataset_organize.py
- image_history_buffer.py
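For context, image_history_buffer.py implements the paper's trick of updating the discriminator with a history of previously refined images rather than only the latest batch. A minimal numpy sketch of the idea (the class and method names here are my own, not wayaai's exact API):

```python
import numpy as np

class ImageHistoryBuffer:
    """Pool of previously refined images (SimGAN's history trick).

    Each discriminator step, part of the minibatch can be drawn from
    this history, and the new refined images replace random entries.
    """

    def __init__(self, max_size, image_shape, rng=None):
        self.max_size = max_size
        self.buffer = np.zeros((0,) + image_shape, dtype=np.float32)
        self.rng = rng or np.random.default_rng(0)

    def add(self, images):
        # Fill free slots first; once full, overwrite random old entries.
        free = self.max_size - len(self.buffer)
        if free > 0:
            take = min(free, len(images))
            self.buffer = np.concatenate([self.buffer, images[:take]])
            images = images[take:]
        if len(images) > 0:
            idx = self.rng.choice(len(self.buffer), size=len(images),
                                  replace=False)
            self.buffer[idx] = images

    def sample(self, n):
        # Draw n past refined images for the discriminator update.
        idx = self.rng.choice(len(self.buffer), size=n, replace=False)
        return self.buffer[idx]
```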
My experiments have not reached the results of the paper Learning from Simulated and Unsupervised Images through Adversarial Training.
I have tried many hyperparameters and network structures, but the results are still poor. The network structure in my code now differs from wayaai's code.
The results of my experiments are below:
Someone shared some tricks with me about how to train GANs.
Thanks to him and to the people who contribute to https://github.com/soumith/ganhacks.
It helped me a lot, and I learned a lot from it.
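As one illustration, a commonly cited trick from that list is one-sided label smoothing for the discriminator's real targets. A small numpy sketch (this is just an example from ganhacks, not necessarily one of the exact tricks used here):

```python
import numpy as np

def bce(pred, target):
    # Elementwise binary cross-entropy, averaged over the batch.
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return float(np.mean(-(target * np.log(pred)
                           + (1 - target) * np.log(1 - pred))))

# Discriminator outputs on a batch of real images (made-up values).
d_real_pred = np.array([0.95, 0.90, 0.99])

# Hard labels (1.0) push the discriminator toward overconfidence...
loss_hard = bce(d_real_pred, np.ones(3))

# ...while one-sided smoothing (e.g. target 0.9) keeps it softer.
loss_smooth = bce(d_real_pred, np.full(3, 0.9))
```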
You can download the datasets by referring to https://github.com/wayaai/SimGAN:
- MPIIGaze Dataset (2.8G). You can see more details about this dataset here. You should use wayaai's script [mpii_gaze_dataset_organize.py] to process the dataset.
- UnityEyes Dataset. You can use the UnityEyes software to generate images. wayaai provides a dataset of 50,000 images, which you can download here. You can see more details about this dataset here.
I used only 1.2M UnityEyes images (52 pics) and 214k MPIIGaze images (144 pics), as the paper describes.
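Once the datasets are processed, training just consumes shuffled minibatches of eye images. A minimal numpy iterator sketch (the in-memory array layout is an assumption on my part, not the exact format the processing scripts produce):

```python
import numpy as np

def minibatches(images, batch_size, rng=None):
    """Yield shuffled minibatches from an array of images.

    `images` stands in for the processed UnityEyes or MPIIGaze arrays;
    shape (num_images, H, W) is assumed for illustration.
    """
    rng = rng or np.random.default_rng(0)
    idx = rng.permutation(len(images))
    for start in range(0, len(images), batch_size):
        yield images[idx[start:start + batch_size]]
```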
Requirements:
- python 3.5
- pytorch 0.2
I am still tuning the parameters and checking whether something is wrong in my code...