
IllustrationGAN

A simple, clean TensorFlow implementation of Generative Adversarial Networks with a focus on modeling illustrations.

Generated Images

These images were generated by the model after being trained on a custom dataset of about 20,000 anime faces that were automatically cropped from illustrations using a face detector.

Checking for Overfitting

It is theoretically possible for the generator network to memorize training set images rather than generalizing and learning to produce novel images of its own. To check for this, I randomly generate images and display the "closest" images in the training set according to mean squared error. The top row contains the randomly generated images; each column below shows the 5 closest images from the training set.


It is clear that the generator does not merely learn to copy training set images, but rather generalizes and is able to produce its own unique images.
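
For reference, here is a minimal sketch of how such a nearest-neighbor check can be implemented. The function name, array shapes, and value ranges are illustrative assumptions, not code from this repository:

```python
import numpy as np

def closest_training_images(generated, training, k=5):
    """For each generated image, return the indices of the k training
    images with the lowest mean squared error. Both arrays have the same
    image shape, e.g. (N, 64, 64, 3), with values scaled to [0, 1]."""
    gen = generated.reshape(len(generated), -1)
    train = training.reshape(len(training), -1)
    closest = []
    for g in gen:
        mse = np.mean((train - g) ** 2, axis=1)  # MSE against every training image
        closest.append(np.argsort(mse)[:k])      # indices of the k nearest images
    return np.array(closest)                     # shape (num_generated, k)
```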

How it Works

Generative Adversarial Networks consist of two neural networks: a discriminator and a generator. The discriminator receives both real images from the training set and generated images produced by the generator. It outputs the probability that an image is real, so it is trained to output high values for real images and low values for generated ones. The generator is trained to produce images that the discriminator thinks are real. The two networks are trained simultaneously so that they compete against each other; as a result, the generator learns to produce more and more realistic images as training progresses.
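
For intuition, here is a minimal sketch of those two adversarial losses using the modern tf.keras API (the repository itself targets an older TensorFlow/PrettyTensor stack, so none of these names come from its code):

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def train_step(generator, discriminator, g_opt, d_opt, real_images, z_dim=100):
    z = tf.random.normal([tf.shape(real_images)[0], z_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(z, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        # Discriminator: push real logits toward 1 and fake logits toward 0.
        d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                 bce(tf.zeros_like(fake_logits), fake_logits)
        # Generator: make the discriminator output 1 for generated images.
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss
```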

Model Architecture

The model is based on DCGANs, but with a few important differences:

  1. No strided convolutions. The generator uses bilinear upsampling to upscale a feature map by a factor of 2, followed by a stride-1 convolution layer. The discriminator uses a stride-1 convolution followed by 2x2 max pooling. (A sketch of both blocks appears after this list.)

  2. Minibatch discrimination. See Improved Techniques for Training GANs for more details. (A sketch appears after this list.)

  3. More fully connected layers in both the generator and discriminator. In DCGANs, both networks have only one fully connected layer.

  4. A novel regularization term applied to the generator network. Normally, increasing the number of fully connected layers in the generator beyond one triggers one of the most common failure modes when training GANs: the generator "collapses" the z-space and produces only a very small number of unique examples, so that very different z vectors yield nearly the same generated image. To fix this, I add a small auxiliary z-predictor network that takes as input the output of the last fully connected layer in the generator and predicts the value of z; in other words, it attempts to learn the inverse of whatever function the generator's fully connected layers compute. The z-predictor network and generator are trained together to predict the value of z. This forces the generator's fully connected layers to learn only those transformations that preserve information about z. As a result, the aforementioned collapse no longer occurs, and the generator is able to leverage the power of the additional fully connected layers. (See the sketch after this list.)
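
To make point 1 concrete, here is one way the two resampling blocks could look in tf.keras. The filter counts, kernel sizes, and activations are illustrative assumptions, not the repository's actual settings:

```python
import tensorflow as tf
from tensorflow.keras import layers

def generator_upsample_block(x, filters):
    # Bilinear 2x upsampling followed by a stride-1 convolution,
    # instead of a strided transposed convolution.
    x = layers.UpSampling2D(size=2, interpolation="bilinear")(x)
    return layers.Conv2D(filters, kernel_size=3, strides=1, padding="same",
                         activation="relu")(x)

def discriminator_downsample_block(x, filters):
    # Stride-1 convolution followed by 2x2 max pooling,
    # instead of a strided convolution.
    x = layers.Conv2D(filters, kernel_size=3, strides=1, padding="same",
                      activation="relu")(x)
    return layers.MaxPooling2D(pool_size=2)(x)
```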
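Point 2's minibatch discrimination can be sketched as a layer that appends, to each example's features, statistics about how similar that example is to the rest of the batch. The tensor shapes follow Improved Techniques for Training GANs; the kernel counts are illustrative defaults, not taken from this repository:

```python
class MinibatchDiscrimination(tf.keras.layers.Layer):
    def __init__(self, num_kernels=50, kernel_dim=5):
        super().__init__()
        self.num_kernels = num_kernels
        self.kernel_dim = kernel_dim

    def build(self, input_shape):
        # Learned tensor T that projects features into num_kernels spaces.
        self.T = self.add_weight(
            shape=(input_shape[-1], self.num_kernels * self.kernel_dim),
            initializer="random_normal", trainable=True)

    def call(self, x):
        M = tf.reshape(tf.matmul(x, self.T),
                       [-1, self.num_kernels, self.kernel_dim])
        # L1 distance between every pair of examples, per kernel.
        diffs = tf.expand_dims(M, 0) - tf.expand_dims(M, 1)
        l1 = tf.reduce_sum(tf.abs(diffs), axis=3)
        # Similarity of each example to the rest of the batch.
        o = tf.reduce_sum(tf.exp(-l1), axis=1)
        return tf.concat([x, o], axis=1)
```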
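Finally, here is a minimal sketch of the z-predictor regularizer described in point 4. The layer sizes, dimensions, and names are assumptions made for illustration, not the repository's actual architecture:

```python
z_dim = 100        # dimensionality of the latent vector z (assumed)
hidden_dim = 1024  # width of the generator's last fully connected layer (assumed)

# Small auxiliary network that tries to recover z from the activations of
# the generator's last fully connected layer.
z_predictor = tf.keras.Sequential([
    layers.Dense(512, activation="relu", input_shape=(hidden_dim,)),
    layers.Dense(z_dim),
])

def z_prediction_loss(z, fc_output):
    """Mean squared error between the true z and the z recovered from the
    generator's fully connected output. Added to the generator's loss and
    minimized jointly with the z-predictor's own weights, it forces the
    fully connected layers to preserve information about z."""
    z_hat = z_predictor(fc_output, training=True)
    return tf.reduce_mean(tf.square(z - z_hat))
```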

Training the Model

Dependencies: TensorFlow, PrettyTensor, numpy, matplotlib

The custom dataset I used is too large to add to a GitHub repository; I am currently looking for a suitable way to distribute it. Instructions for training the model will be added to this readme once the dataset is available.