Detecting Overfitting of Deep Generators via Latent Recovery

PyTorch implementation of "Detecting Overfitting of Deep Generators via Latent Recovery", CVPR 2019 (CVF link)

Demo

There is a colab demo here: https://colab.research.google.com/drive/1N6zP4xlPunWOkmakcl0mamfhq946nMLB?usp=sharing

To run the demo, first download the networks from this google drive and place them in a folder named 'networks' (or a directory of your choice). Then modify the path to your CelebA-HQ dataset in config_latent_recovery_pggan, and run the demo by calling bash config_latent_recovery_pggan. (Note: CelebA-HQ must be a folder of images, compatible with datasets.ImageFolder.)
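
The snippet below is a minimal sketch of what "compatible with datasets.ImageFolder" means: a root directory containing at least one subfolder of images. The path and resolution are placeholders, not the repo's actual defaults.

```python
import torchvision.transforms as transforms
from torchvision import datasets

celeba_root = '/path/to/celeba_hq'    # hypothetical path; point this at your CelebA-HQ root
transform = transforms.Compose([
    transforms.Resize(256),           # assumed resolution; match your generator's output size
    transforms.CenterCrop(256),
    transforms.ToTensor(),
])

# ImageFolder expects root/<subfolder>/<image files>
dataset = datasets.ImageFolder(celeba_root, transform=transform)
print(len(dataset), 'images found')
```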

To run latent_recovery.py on your own networks, save the network directly (with torch.save(..)), place the network definition file in this folder (for example, DCGAN_ryan.py), and provide the network path/name when calling latent_recovery.
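
A minimal sketch of preparing your own generator, assuming DCGAN_ryan.py defines a Generator class (the class name and constructor arguments are illustrative and depend on your own network definition):

```python
import torch
from DCGAN_ryan import Generator   # network definition file kept in this folder

netG = Generator()
# ... train the generator or load its weights here ...

# Save the full module (not just the state_dict), so latent_recovery can load it directly
torch.save(netG, 'networks/my_generator.pth')
```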

Dependencies

  • pytorch
  • scipy
  • numpy

Example Images (see paper)

A common heuristic to detect overfitting is to show dataset nearest neighbors of generated images. Here, we instead find the closest image a generator can produce to a given train or test image, which is more consistent under image transformations (see the figure below; a sketch of the recovery optimization follows it).

Recovery vs NN in dataset
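
A minimal sketch of the latent recovery idea, assuming a generator netG that maps a latent vector of size z_dim to an image batch; the actual latent_recovery.py script may differ in optimizer, loss, and latent shape.

```python
import torch
import torch.nn.functional as F

def recover_latent(netG, target, z_dim=512, steps=50, device='cpu'):
    """Optimize a latent code z so that netG(z) matches `target` (a 1xCxHxW tensor)."""
    z = torch.randn(1, z_dim, device=device, requires_grad=True)
    optimizer = torch.optim.LBFGS([z], max_iter=steps)

    def closure():
        optimizer.zero_grad()
        loss = F.mse_loss(netG(z), target)  # distance between recovered and target image
        loss.backward()
        return loss

    optimizer.step(closure)
    return z.detach(), netG(z).detach()    # recovered latent and its generated image
```

The final recovery error (and the visual quality of netG(z)) on train vs. test images is what the paper uses to diagnose overfitting: a memorizing generator recovers training images much better than unseen test images.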

Finally, networks that overfit also exhibit worse visual results during recovery.

Recovery with PGGAN, Mescheder et al., GLO (with 256 training images) and CycleGAN (with 256 training images)