Image_Denoising_Autoencoder

Image Denoising Autoencoder on MNIST/FashionMNIST using PyTorch and a CNN

Dataset:

The MNIST dataset is used to train the model; it contains 60,000 training examples and 10,000 testing examples. FashionMNIST, which has the same 28x28 image size and train/test split, can be used as a drop-in replacement.
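A minimal data-loading sketch, assuming torchvision's built-in MNIST dataset, a batch size of 100, and ToTensor normalization (the data root and variable names are illustrative):

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

batch_size = 100
transform = transforms.ToTensor()  # converts images to tensors with pixel values in [0, 1]

# Download/load MNIST; swap in datasets.FashionMNIST for the FashionMNIST experiments
train_set = datasets.MNIST(root="./data", train=True, download=True, transform=transform)
test_set = datasets.MNIST(root="./data", train=False, download=True, transform=transform)

train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_set, batch_size=batch_size, shuffle=False)
```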

Architecture Of Model

A Convolutional Neural Network (CNN) is used for the model. It contains 6 layers: 3 in the encoder and 3 in the decoder. The encoder uses Conv2d filters, with parameters given as Conv2d(input_channels, output_channels, kernel_size, stride, padding). The decoder uses ConvTranspose2d filters, with parameters given as ConvTranspose2d(input_channels, output_channels, kernel_size, stride, padding, output_padding). A sketch of the full model follows the layer list below.

The model consists of the following sequence of layers:

Layer 1: Conv2d(1,16,3,stride=2,padding=1)

Layer 2: Conv2d(16,32,3,stride=2,padding=1)

Layer 3: Conv2d(32,64,5)

Layer 4: ConvTranspose2d(64,32,5)

Layer 5: ConvTranspose2d(32,16,3,stride=2,padding=1,output_padding=1)

Layer 6: ConvTranspose2d(16,1,3,stride=2,padding=1,output_padding=1)
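A minimal sketch of the model built from the layers listed above, assuming ReLU after each hidden layer and Sigmoid on the output (the class name is illustrative):

```python
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 1x28x28 -> 16x14x14 -> 32x7x7 -> 64x3x3
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, 5),
            nn.ReLU(),
        )
        # Decoder mirrors the encoder back to 1x28x28
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 5),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),  # keeps outputs in [0, 1] to match normalized pixel values
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```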

Hyperparameters

Batch Size = 100

Learning rate = 0.001

MSE Loss

Adam Optimizer

Activation Functions

ReLU and Sigmoid activation functions are used to introduce non-linearity into the model.

Loss Function and Optimization:

Mean Squared Error (MSE) loss is used to measure the reconstruction error. The Adam optimizer is used to train the model with a learning rate of 0.001.
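A short setup sketch for the loss and optimizer, assuming the DenoisingAutoencoder class sketched above:

```python
import torch
import torch.nn as nn

model = DenoisingAutoencoder()            # model sketched in the architecture section
criterion = nn.MSELoss()                  # mean squared error between output and clean image
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```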

MNIST:

[sample MNIST images]

FASHIONMNIST:

[sample FashionMNIST images]

Noise Added

Random Gaussian noise is added via torch.randn(), scaled by a noise factor so the noise level can be adjusted. Other kinds of corruption, such as salt-and-pepper noise, could be used as well.
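A sketch of the noise corruption and the training loop, assuming the train_loader, model, criterion, and optimizer from the earlier sketches; noise_factor and num_epochs are illustrative values:

```python
import torch

noise_factor = 0.5   # assumed value; larger values add stronger noise
num_epochs = 10      # assumed value

for epoch in range(num_epochs):
    for images, _ in train_loader:                        # labels are not needed
        # Corrupt the clean images with scaled Gaussian noise
        noisy = images + noise_factor * torch.randn_like(images)
        noisy = torch.clamp(noisy, 0.0, 1.0)              # keep pixels in [0, 1]

        outputs = model(noisy)                            # reconstruct from the noisy input
        loss = criterion(outputs, images)                 # compare against the clean images

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```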

OUTPUT

Original Images: [sample outputs in the notebook]

Noisy Images: [sample outputs in the notebook]

Reconstructed Images: [sample outputs in the notebook]
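A possible visualization sketch for producing figures like these, assuming matplotlib and the model, test_loader, and noise_factor from the earlier sketches:

```python
import matplotlib.pyplot as plt
import torch

model.eval()
with torch.no_grad():
    images, _ = next(iter(test_loader))
    noisy = torch.clamp(images + noise_factor * torch.randn_like(images), 0.0, 1.0)
    reconstructed = model(noisy)

# Rows: original, noisy, reconstructed; columns: first 8 test images
rows = [("Original", images), ("Noisy", noisy), ("Reconstructed", reconstructed)]
fig, axes = plt.subplots(len(rows), 8, figsize=(12, 5))
for r, (title, batch) in enumerate(rows):
    for c in range(8):
        axes[r, c].imshow(batch[c].squeeze(), cmap="gray")
        axes[r, c].axis("off")
    axes[r, 0].set_title(title, loc="left")
plt.show()
```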