This code helps to understand the concept of autoencoders. The autoencoder is trained on my facial dataset and learns a 1024-unit encoding from an input of 7500 pixels. This is a two-step procedure:
- Encoder - learns an embedding from the input pixels.
- Decoder - recreates the image from the embedding produced by the encoder.
Three architectures are provided:
- Simple Network
- Deep Network
- Convolutional Network
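The encoder/decoder idea above can be sketched with plain NumPy. This is an illustrative toy, not the project's Keras models: the dimensions are scaled down from 7500 → 1024 for speed, and both halves are single linear layers trained with gradient descent on random stand-in data.

```python
import numpy as np

# Toy linear autoencoder: encoder maps pixels -> embedding,
# decoder maps embedding -> pixels. Dimensions are stand-ins
# for the README's 7500-pixel input and 1024-unit code.
rng = np.random.default_rng(0)
n_pixels, n_code = 64, 8
X = rng.random((32, n_pixels))          # 32 fake "images" in [0, 1]

W_enc = rng.normal(0, 0.1, (n_pixels, n_code))
W_dec = rng.normal(0, 0.1, (n_code, n_pixels))

lr = 0.01
losses = []
for _ in range(200):
    code = X @ W_enc                    # encoder: pixels -> embedding
    recon = code @ W_dec                # decoder: embedding -> pixels
    err = recon - X                     # reconstruction error
    losses.append((err ** 2).mean())
    # Gradient steps on the squared reconstruction error
    W_dec -= lr * (code.T @ err) / len(X)
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)

print(losses[0], losses[-1])            # loss shrinks as training proceeds
```

The real project replaces these linear layers with simple, deep, and convolutional Keras networks, but the training loop follows the same reconstruction-loss principle.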
You can install Conda for Python, which resolves all the machine-learning dependencies. Alternatively, install the requirements with pip:
pip install -r requirements.txt
An autoencoder is a type of artificial neural network used to learn efficient data codings in an unsupervised manner. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for the purpose of dimensionality reduction. Recently, the autoencoder concept has become more widely used for learning generative models of data.
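To make the dimensionality-reduction view concrete: a linear autoencoder is closely related to PCA. The hedged sketch below (toy data, not the facial dataset) compresses 20-dimensional points that truly live near a 3-dimensional subspace and reconstructs them almost perfectly via SVD.

```python
import numpy as np

# Toy data lying near a 3-dimensional subspace of 20 dimensions
rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 3))
basis = rng.normal(size=(3, 20))
X = latent @ basis + 0.01 * rng.normal(size=(100, 20))

# PCA via SVD: the top-k right singular vectors play the role of a
# linear "encoder" (and their transpose the "decoder").
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
codes = Xc @ Vt[:k].T        # "encoder": 20 dims -> 3 dims
recon = codes @ Vt[:k]       # "decoder": 3 dims -> 20 dims

rel_err = np.linalg.norm(recon - Xc) / np.linalg.norm(Xc)
print(rel_err)               # small, since the data is ~3-dimensional
```

Nonlinear autoencoders like the ones in this repo generalize this idea: the encoder and decoder become multi-layer networks that can capture structure PCA cannot.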
Creating our own dataset
- Networks used - Simple Network, Deep Network, Convolutional Network
- Technique - Autoencoders
If you face any problems, kindly raise an issue.
- First, run `LoadData.py`, which loads the images from `folder1` (you can change the name) and stores them in a pickle file.
- Next, run `FaceCoder.py`, which trains a simple, a deep, and a convolutional autoencoder and saves each model to an `.h5` file.
- Once you have the data, run `FaceApp.py`, which uses the dlib library to detect your face, encodes it, and then decodes it to display the reconstruction.
- To alter the models, edit `FaceCoder.py`.
- For TensorBoard visualization, go to the specific log directory and run `tensorboard --logdir=.`, then open `localhost:6006` to visualize your loss function.
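The first step above can be sketched as follows. This is a hedged illustration of the `LoadData.py` idea, not its actual code: real image decoding (e.g. with PIL or OpenCV) is replaced by random arrays so the sketch stays dependency-free, and the 50x50x3 shape is chosen only because it flattens to the 7500 pixels mentioned earlier.

```python
import os
import pickle
import tempfile

import numpy as np

# Stand-ins for photos loaded from a folder such as "folder1"
rng = np.random.default_rng(0)
images = [rng.random((50, 50, 3)) for _ in range(4)]

# Flatten each 50x50x3 image into a 7500-pixel vector, the shape
# the simple and deep autoencoders expect as input.
data = np.stack([img.reshape(-1) for img in images])

# Store the whole dataset in a single pickle file, then reload it
path = os.path.join(tempfile.gettempdir(), "faces.pickle")
with open(path, "wb") as f:
    pickle.dump(data, f)

with open(path, "rb") as f:
    loaded = pickle.load(f)

print(loaded.shape)  # (4, 7500)
```

`FaceCoder.py` would then read this pickle file and feed the array to the networks for training.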