
Tensorflow eager implementation of Pix2Pix (Image-to-image translation with conditional adversarial networks)


Pix2Pix

Reference

Train
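The training objective in Pix2Pix combines a conditional GAN loss with an L1 reconstruction term weighted by λ = 100 (per the paper). A minimal NumPy sketch of the two losses; the function and variable names here are illustrative stand-ins, not code from this repository:

```python
import numpy as np

def sigmoid_bce(logits, labels):
    # Numerically stable sigmoid cross-entropy (same formula as
    # tf.nn.sigmoid_cross_entropy_with_logits), averaged over all elements.
    return np.mean(np.maximum(logits, 0) - logits * labels
                   + np.log1p(np.exp(-np.abs(logits))))

LAMBDA = 100  # L1 weight from the Pix2Pix paper

def generator_loss(disc_fake_logits, gen_output, target):
    # The generator tries to make the discriminator output "real" (label 1)
    # while also staying close to the target image in L1 distance.
    gan_loss = sigmoid_bce(disc_fake_logits, np.ones_like(disc_fake_logits))
    l1_loss = np.mean(np.abs(target - gen_output))
    return gan_loss + LAMBDA * l1_loss

def discriminator_loss(disc_real_logits, disc_fake_logits):
    # The discriminator labels real pairs 1 and generated pairs 0.
    real_loss = sigmoid_bce(disc_real_logits, np.ones_like(disc_real_logits))
    fake_loss = sigmoid_bce(disc_fake_logits, np.zeros_like(disc_fake_logits))
    return real_loss + fake_loss
```

In the actual eager-mode code these losses would be computed inside a `tf.GradientTape` and applied with two Adam optimizers, one per network.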

Test

Run `python test.py` to load the weights saved under `weights/` and generate 5 sample images.

Network Architecture

  • The generator is a U-Net, defined in PixGenerator.py. Its output layer is tanh-activated.
  • The discriminator is a PatchGAN that classifies a 30×30 grid of output patches.
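The 30×30 figure is the spatial size of the discriminator output for 256×256 inputs. A quick sanity check of that number, assuming the common Pix2Pix discriminator layout (three stride-2 4×4 convolutions with 'same' padding, followed by two zero-pad + 4×4 'valid' convolutions); this is a back-of-the-envelope calculation, not this repository's code:

```python
import math

def same_stride2(n):
    # Output size of a stride-2 convolution with 'same' padding.
    return math.ceil(n / 2)

def valid_stride1(n, k=4):
    # Output size of a stride-1 convolution with 'valid' padding.
    return n - k + 1

n = 256
for _ in range(3):          # three stride-2 downsampling convs: 256 -> 128 -> 64 -> 32
    n = same_stride2(n)
n = valid_stride1(n + 2)    # zero-pad by 1 on each side, then a 4x4 conv: 34 -> 31
n = valid_stride1(n + 2)    # zero-pad again, then the final 1-channel conv: 33 -> 30
print(n)                    # 30
```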

Data

  • The dataset is augmented with random_jitter and normalized to [-1, 1].
  • The preprocessing is implemented in data_preprocess.py.
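As a sketch of what random jitter and normalization typically do in Pix2Pix pipelines: resize to 286×286, random-crop back to 256×256, random horizontal flip, then scale pixel values to [-1, 1]. The helpers below are illustrative NumPy stand-ins, not the code in data_preprocess.py:

```python
import numpy as np

def random_jitter(img, rng, big=286, out=256):
    # Upscale to big x big via nearest-neighbor indexing (stand-in for a real resize).
    h, w, _ = img.shape
    rows = np.arange(big) * h // big
    cols = np.arange(big) * w // big
    img = img[rows][:, cols]
    # Random crop back to out x out.
    top = rng.integers(0, big - out + 1)
    left = rng.integers(0, big - out + 1)
    img = img[top:top + out, left:left + out]
    # Random horizontal flip.
    if rng.random() < 0.5:
        img = img[:, ::-1]
    return img

def normalize(img):
    # Map uint8 pixel values [0, 255] to [-1, 1].
    return img.astype(np.float32) / 127.5 - 1.0
```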

Hyper-parameter

  • Dropout is applied at both training and test time, following the paper. Unlike the original GAN, there is no random noise vector z as input; the stochasticity of the output comes from dropout instead.
  • Training with BATCH_SIZE = 1 gives better results here. However, if the generator is a naive encoder-decoder network, you should use a batch size greater than 1; the skip connections of the U-Net used here keep the bottleneck activations from collapsing to zero.
  • In the original paper, batch normalization with a batch size of 1 is referred to as 'instance normalization'.
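Because dropout stays active at test time, two forward passes over the same input generally produce different outputs. A tiny NumPy illustration of that idea; the "layer" below is a made-up stand-in, not this repository's generator:

```python
import numpy as np

def dropout(x, rng, rate=0.5):
    # Inverted dropout: zero each unit with probability `rate`, rescale the rest
    # so the expected activation is unchanged.
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def noisy_layer(x, rng):
    # A stand-in layer with dropout applied even at inference, as Pix2Pix does.
    return dropout(x, rng)

x = np.ones(16)
out_a = noisy_layer(x, np.random.default_rng(0))
out_b = noisy_layer(x, np.random.default_rng(1))
# Different dropout masks -> different outputs for the same input.
```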

Generated image

  • Samples after epochs 5, 30, 70, 120, and 150 (images in the repository).