The aim is to convert hand signs to text, improving accessibility for people who rely on sign language. The code is modular: it can be trained on any sign language with any number of classes, so the whole pipeline is reusable; you only need to provide the dataset in the correct format.
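As a rough illustration of what a "correct format" might look like, here is a minimal sketch assuming a Keras-style one-folder-per-class layout; the paths and class names (dataset/train, hello, etc.) are hypothetical, not part of this repository:

```python
# Minimal sketch, assuming a one-folder-per-class dataset layout, e.g.:
#   dataset/train/hello/*.jpg
#   dataset/train/thanks/*.jpg
# Folder names are hypothetical; Keras infers the classes from them.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

train_gen = datagen.flow_from_directory(
    "dataset/train",            # hypothetical path to the prepared frames
    target_size=(224, 224),     # resize frames to a common input size
    class_mode="categorical",   # multi-class one-hot labels
    subset="training",
)
print(train_gen.class_indices)  # mapping from sign name to class index
```

With this layout, adding a new sign language or a new class is just a matter of adding folders; no code changes are needed.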
To go through the presentation, open ML_and_Mine.pptx.
- For the Medium article
First, run the vid2frame.ipynb notebook. It downloads the dataset and converts the videos into frames so the data is ready for the model; a rough sketch of that frame-extraction idea is shown below.
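The notebook itself is the authoritative version; this is only a hedged sketch of the general video-to-frame step it performs. The file name, output folder, and sampling rate below are assumptions:

```python
# Sketch of extracting frames from a sign video with OpenCV.
# "sign.mp4" and the "frames" folder are hypothetical names.
import os
import cv2

video_path = "sign.mp4"
out_dir = "frames"
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(video_path)
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:                      # end of video reached
        break
    if idx % 5 == 0:                # keep every 5th frame (assumed rate)
        cv2.imwrite(os.path.join(out_dir, f"frame_{idx:05d}.jpg"), frame)
    idx += 1
cap.release()
```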
Finally, run Final_model_file.ipynb. Two models are multiplexed in this notebook, so change the variable model_type to run the desired one; a sketch of this kind of switch follows.
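A minimal sketch of how such a model_type switch might look; the model names, builder functions, and class count below are hypothetical, not the notebook's exact code:

```python
# Hypothetical sketch of selecting one of two multiplexed models.
from tensorflow.keras import layers, models

def build_small_cnn(num_classes, input_shape=(224, 224, 3)):
    # A small CNN trained from scratch.
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(num_classes, activation="softmax"),
    ])

def build_transfer_model(num_classes, input_shape=(224, 224, 3)):
    # A transfer-learning model on top of a frozen pretrained backbone.
    from tensorflow.keras.applications import MobileNetV2
    base = MobileNetV2(include_top=False, pooling="avg", input_shape=input_shape)
    base.trainable = False
    return models.Sequential([base, layers.Dense(num_classes, activation="softmax")])

model_type = "cnn"   # change this variable to pick the model, e.g. "transfer"
builders = {"cnn": build_small_cnn, "transfer": build_transfer_model}
model = builders[model_type](num_classes=26)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```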
The code also contains sufficient comments to help you understand the notebooks better.