
WassersteinGAN

TensorFlow implementation of Wasserstein GAN (Arjovsky et al., arXiv:1701.07875).


The DCGAN model and ops are modified from Taehoon Kim's (@carpedm20) implementation.


Usage

Download dataset:

python download.py celebA

To train:

python main.py --dataset celebA --is_train --is_crop

Or modify and use run.py:

python run.py
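
run.py's contents aren't reproduced here; a hypothetical equivalent is a thin wrapper that launches main.py with the same flags shown above:

import subprocess

# Hypothetical wrapper: invoke main.py with preset flags.
subprocess.check_call([
    "python", "main.py",
    "--dataset", "celebA",
    "--is_train",
    "--is_crop",
])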

Note: by default, the entire input dataset is loaded into a NumPy array. This avoids batch-by-batch disk I/O during training. Turn this option off if your system has too little memory or your dataset is too large:

python main.py --dataset celebA --is_train --is_crop --preload_data False
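
Preloading simply reads every image once up front and stacks the result into a single array, so training batches are sliced from memory instead of read from disk each step. A minimal sketch of the idea (the file pattern and 64x64 size are assumptions, not values taken from this repo):

import glob
import numpy as np
from PIL import Image

# Read every image once and stack into one float array in [-1, 1].
paths = sorted(glob.glob("./data/celebA/*.jpg"))  # path pattern is an assumption
images = np.stack([
    np.asarray(Image.open(p).resize((64, 64)), dtype=np.float32) / 127.5 - 1.0
    for p in paths
])  # shape (N, 64, 64, 3)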

Results

My experiments show that training with the Wasserstein loss is much more stable than with the conventional heuristic loss -log D(G(z)). However, the generated image quality is much lower. See the examples below.
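
For context, the two objectives being compared look roughly like this in TensorFlow 1.x. This is a minimal sketch, not this repo's exact code: d_real and d_fake stand in for the critic's raw (pre-sigmoid) outputs, and the 0.01 clipping bound is the WGAN paper's default.

import tensorflow as tf

# Stand-ins for the critic's raw outputs on real and generated batches.
d_real = tf.placeholder(tf.float32, [None], name="d_real")
d_fake = tf.placeholder(tf.float32, [None], name="d_fake")

# Wasserstein objectives: no sigmoid on the critic; after every critic
# update its weights are clipped to keep the function roughly Lipschitz.
d_loss_w = tf.reduce_mean(d_fake) - tf.reduce_mean(d_real)
g_loss_w = -tf.reduce_mean(d_fake)
critic_vars = tf.trainable_variables()  # in practice, filter to the critic's scope
clip_ops = [v.assign(tf.clip_by_value(v, -0.01, 0.01)) for v in critic_vars]

# Conventional heuristic (non-saturating) objectives:
d_loss_h = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_real, labels=tf.ones_like(d_real)) +
    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_fake, labels=tf.zeros_like(d_fake)))
g_loss_h = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_fake, labels=tf.ones_like(d_fake)))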

The generator and discriminator (critic) architectures are identical in both experiments. For cross-checking, here is what differs:

  1. Batch normalization parameters: epsilon = 1e-3 (1e-5) and momentum = 0.99 (0.9) for the Wasserstein (heuristic) run.
  2. RMSProp (Adam) as the optimizer for the Wasserstein (heuristic) run; see the sketch after this list.
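
As a sketch of the two configurations in TensorFlow 1.x (the learning rates are the values suggested by the WGAN and DCGAN papers, not necessarily this repo's; batch norm's momentum corresponds to the decay argument):

import tensorflow as tf

# Wasserstein run: RMSProp optimizer, BN epsilon=1e-3, momentum (decay) 0.99.
w_optim = tf.train.RMSPropOptimizer(learning_rate=5e-5)  # lr is an assumption
def w_bn(x):
    return tf.contrib.layers.batch_norm(x, decay=0.99, epsilon=1e-3, is_training=True)

# Heuristic run: Adam optimizer, BN epsilon=1e-5, momentum (decay) 0.9.
h_optim = tf.train.AdamOptimizer(learning_rate=2e-4, beta1=0.5)  # lr/beta1 are assumptions
def h_bn(x):
    return tf.contrib.layers.batch_norm(x, decay=0.9, epsilon=1e-5, is_training=True)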

Wasserstein Loss

Sample images after 50 epochs:

Heuristic Loss

Sample images after 25 epochs using the heuristic loss -log D(G(z)):