- Dataset
- Model Architecture
- Hyperparameters
- Activation Function
- Loss Function and Optimization
- Noise Added
- Output
The MNIST dataset is used to train the model. MNIST contains 60,000 training examples and 10,000 testing examples.
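A minimal sketch of how the dataset might be loaded with torchvision (the root path, download flag, and ToTensor transform are assumptions; the batch size of 100 matches the hyperparameters listed further below):

```python
import torch
from torchvision import datasets, transforms

# Convert images to tensors with pixel values in [0, 1]; MNIST digits are 1x28x28.
transform = transforms.ToTensor()

train_set = datasets.MNIST(root="./data", train=True, download=True, transform=transform)
test_set = datasets.MNIST(root="./data", train=False, download=True, transform=transform)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=100, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=100, shuffle=False)
```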
A Convolutional Neural Network (CNN) is used to build the model. The model contains 6 layers: 3 in the encoder and 3 in the decoder. The encoder uses Conv2d layers, which take parameters as Conv2d(input_channels, output_channels, kernel_size, stride, padding). The decoder uses ConvTranspose2d layers, which take parameters as ConvTranspose2d(input_channels, output_channels, kernel_size, stride, padding, output_padding). A code sketch of the full model is given after the layer list below.
The model consists of the following sequence of layers:
Layer 1: Conv2d(1,16,3,stride=2,padding=1)
Layer 2: Conv2d(16,32,3,stride=2,padding=1)
Layer 3: Conv2d(32,64,5)
Layer 4: ConvTranspose2d(64,32,5)
Layer 5: ConvTranspose2d(32,16,3,stride=2,padding=1,output_padding=1)
Layer 6: ConvTranspose2d(16,1,3,stride=2,padding=1,output_padding=1)
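Putting these layers together, a sketch of the model in PyTorch might look like the following (the class name DenoisingAutoencoder is hypothetical; the ReLU/Sigmoid placement follows the activation-function note below):

```python
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Hypothetical module name; layer parameters follow the list above."""
    def __init__(self):
        super().__init__()
        # Encoder: 1x28x28 -> 16x14x14 -> 32x7x7 -> 64x3x3
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, 5),
            nn.ReLU(),
        )
        # Decoder: 64x3x3 -> 32x7x7 -> 16x14x14 -> 1x28x28
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 5),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),  # keep outputs in [0, 1], matching the input pixel range
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```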
Batch Size = 100
Learning rate = 0.001
MSE Loss
ADAM Optimizer
ReLU and Sigmoid activation functions are used to introduce non-linearity into the model.
The Mean Squared Error (MSE) loss function is used to calculate the reconstruction loss. The Adam optimizer is used to optimize the model with a learning rate of 0.001.
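A minimal training-loop sketch using this loss and optimizer, assuming the train_loader and DenoisingAutoencoder from the sketches above (the epoch count and the inline noise factor of 0.3 are assumptions):

```python
import torch
import torch.nn as nn

model = DenoisingAutoencoder()          # hypothetical class from the sketch above
criterion = nn.MSELoss()                # mean squared error between output and clean image
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(10):                 # number of epochs is an assumption
    for images, _ in train_loader:      # labels are not needed for reconstruction
        noisy = images + 0.3 * torch.randn_like(images)   # corrupt inputs (noise factor assumed)
        noisy = noisy.clamp(0.0, 1.0)
        output = model(noisy)
        loss = criterion(output, images) # compare reconstruction against the clean image
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```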
MNIST:
FashionMNIST:
Random Gaussian noise is added via torch.randn(), and a noise factor controls how strongly the images are corrupted. Other types of noise, such as salt-and-pepper noise, can also be used.
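A small sketch of how such a noise helper might look (the function name add_noise and its default noise factor are hypothetical):

```python
import torch

def add_noise(images, noise_factor=0.5):
    """Hypothetical helper: corrupt a batch of images with scaled Gaussian noise.

    torch.randn() draws from a standard normal distribution, so noise_factor
    controls how much the images are corrupted.
    """
    noisy = images + noise_factor * torch.randn_like(images)
    return noisy.clamp(0.0, 1.0)  # keep pixel values in the valid [0, 1] range
```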
Original Images:
Noisy Images:
Reconstructed Images: