CNN_Implementations

Data and trained models can be downloaded from https://goo.gl/7PrKD2.


[Animation: convolutional variational autoencoder training on MNIST]

Visualization of the 2D latent variable of a convolutional variational autoencoder during training on the MNIST dataset (handwritten digits). The image on the right shows the mean of the approximate posterior Q(z|X), where each color represents one digit class; the image on the left shows samples from the decoder (likelihood) P(X|z). The title gives the iteration number and the total loss [Reconstruction + KL] at the moment the images were produced by the model under training. One can observe that as the generated outputs (left image) improve, the points in the latent space (posterior) also form better-separated clusters. Note also that the points move closer together because the KL part of the total loss imposes a zero-mean Gaussian distribution on the latent variable, which takes effect gradually as training proceeds.
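For readers unfamiliar with that decomposition of the objective, the following is a minimal sketch of the Reconstruction + KL loss in TensorFlow 2.x style. It is not the repository's exact code; the names `z_mean` and `z_log_var` and the squared-error reconstruction term are illustrative assumptions.

```python
import tensorflow as tf

def vae_loss(x, x_recon, z_mean, z_log_var):
    """Total loss = Reconstruction + KL, as in the figure title."""
    # Reconstruction term: how well the decoder P(X|z) reproduces the input.
    recon = tf.reduce_sum(tf.square(x - x_recon), axis=[1, 2, 3])
    # Closed-form KL divergence between the approximate posterior
    # Q(z|X) = N(z_mean, exp(z_log_var)) and the N(0, I) prior.
    # This is the term that pulls latent points toward the origin.
    kl = -0.5 * tf.reduce_sum(
        1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1)
    return tf.reduce_mean(recon + kl)
```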

TensorFlow implementations of various generative models based on convolutional neural networks. Across the different models, I always keep the same architecture for the encoder, decoder, and discriminator; this makes it easy to compare models that differ only in their cost functions. A sketch of such a shared pair is given below.
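As a concrete illustration of that design, here is a hedged sketch of what a shared convolutional encoder/decoder pair for MNIST-sized inputs could look like in TensorFlow 2.x Keras. The layer sizes and function names are assumptions for illustration, not the repository's exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_encoder(latent_dim=2):
    # Shared convolutional encoder; the same body can serve the VAE
    # encoder, the GAN discriminator, and the autoencoders, so that
    # only the cost function changes between models.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, 4, strides=2, padding="same", activation="relu"),
        layers.Conv2D(64, 4, strides=2, padding="same", activation="relu"),
        layers.Flatten(),
        layers.Dense(latent_dim),
    ])

def build_decoder(latent_dim=2):
    # Mirror-image transposed-convolution decoder / generator.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(latent_dim,)),
        layers.Dense(7 * 7 * 64, activation="relu"),
        layers.Reshape((7, 7, 64)),
        layers.Conv2DTranspose(32, 4, strides=2, padding="same",
                               activation="relu"),
        layers.Conv2DTranspose(1, 4, strides=2, padding="same",
                               activation="sigmoid"),
    ])
```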

Use this code with no warranty, and please respect the accompanying license.

Generative Adversarial Networks

Jupyter Notebook / Python code

Jupyter Notebook / Python code

Python code

Python code
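Since the models above share one architecture and differ mainly in their cost functions, here is a hedged sketch of one such cost function, the standard non-saturating GAN objective, in TensorFlow 2.x style. The logits are assumed to come from the shared discriminator applied to real and generated batches.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_logits, fake_logits):
    # D is trained to assign label 1 to real images and 0 to generated ones.
    real_loss = bce(tf.ones_like(real_logits), real_logits)
    fake_loss = bce(tf.zeros_like(fake_logits), fake_logits)
    return real_loss + fake_loss

def generator_loss(fake_logits):
    # Non-saturating trick: G maximizes log D(G(z)) instead of
    # minimizing log(1 - D(G(z))).
    return bce(tf.ones_like(fake_logits), fake_logits)
```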

Variational Autoencoders

Jupyter Notebook / Python code

Hybrid Models

Jupyter Notebook / Python code

Basic Models

Convolutional Denoising Autoencoders

Jupyter Notebook / Python code
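A convolutional denoising autoencoder reuses the same encoder/decoder pair but trains it to reconstruct the clean image from a corrupted input. Below is a minimal sketch of one possible training objective, assuming Gaussian corruption; the noise level and function names are illustrative choices, not the repository's exact setup.

```python
import tensorflow as tf

def denoising_loss(encoder, decoder, x, noise_std=0.3):
    # Corrupt the input with additive Gaussian noise.
    x_noisy = x + tf.random.normal(tf.shape(x), stddev=noise_std)
    x_noisy = tf.clip_by_value(x_noisy, 0.0, 1.0)
    # Reconstruct from the corrupted input.
    x_recon = decoder(encoder(x_noisy))
    # The loss compares the reconstruction against the *clean* target,
    # which is what forces the model to learn to remove the noise.
    return tf.reduce_mean(tf.square(x - x_recon))
```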