Submission for IIITD's ALIVE project round 2.
The model is trained on a combination of two datasets to improve accuracy:
- https://www.kaggle.com/prasadvpatil/mrl-dataset (2000 open eye images and 2000 closed eye images)
- https://www.kaggle.com/serenaraju/yawn-eye-dataset-new (used partially, i.e., only the open and closed eye images were taken: 726 open eye images and 726 closed eye images)
Total number of images used:
- 2726 open eye images
- 2726 closed eye images
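The two datasets can be merged into a single training directory before training; a minimal sketch (the directory names and layout below are assumptions, not the repo's actual structure):

```python
import shutil
from pathlib import Path

def merge_datasets(sources, destination):
    """Copy open/closed eye images from several dataset folders into one
    combined directory, prefixing files with the source folder name to
    avoid filename collisions."""
    destination = Path(destination)
    for label in ("open", "closed"):
        (destination / label).mkdir(parents=True, exist_ok=True)
    for source in map(Path, sources):
        for label in ("open", "closed"):
            for image in (source / label).glob("*"):
                shutil.copy(image, destination / label / f"{source.name}_{image.name}")

# Example usage (paths are hypothetical):
# merge_datasets(["mrl_dataset", "yawn_eye_dataset"], "combined")
```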
- Clone this repository:
  ```
  git clone https://github.com/Saransh-cpp/IIITD_ALIVE_DSM
  ```
- Create a virtual environment:
  ```
  cd IIITD_ALIVE_DSM
  python -m venv .env
  ```
- Activate the environment (on Windows; use `source .env/bin/activate` on Linux/macOS):
  ```
  .env/Scripts/activate
  ```
- Install the requirements:
  ```
  pip install -r requirements.txt
  ```
- Run `train.ipynb` to re-train the model (a trained model is already included).
- To start the live video feed for drowsiness detection, run:
  ```
  python drowsiness_detector.py
  ```
Note: You might need to edit the number passed to `VideoCapture` (line 26 of `drowsiness_detector.py` at commit ced24be) to match your webcam's device index (0, 1, 2, 3, ...).
- To stop the live feed, press `q`.
- The training step uses Transfer Learning with `VGG19`.
- The last layer of `VGG19` has been removed, and a `Flatten` layer with a new output layer has been added.
- The model uses the `adam` optimizer and the `categorical_crossentropy` loss function.
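Based on that description, the model construction likely resembles the Keras sketch below; the input shape and frozen base are assumptions (the notebook is the authoritative source):

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

def build_model(weights="imagenet", input_shape=(224, 224, 3)):
    """VGG19 without its classification head, plus a Flatten layer and a
    new 2-way softmax output (open vs. closed eyes)."""
    base = VGG19(weights=weights, include_top=False, input_shape=input_shape)
    base.trainable = False  # keep the pretrained convolutional features
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Pass `weights="imagenet"` (the default) for actual transfer learning; `weights=None` builds the same architecture with random weights.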
The train notebook is very well documented.
Demo video: `2022-01-11.23-17-22.mp4`
The repository contains a CI workflow built with GitHub Actions, which can be extended to test the scripts.
The model has been trained and converted to the tflite format in the train notebook. The trained tflite model is also included in the repository.
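The conversion step in the notebook is presumably the standard `TFLiteConverter` flow; a minimal sketch (the helper name and output filename are assumptions):

```python
import tensorflow as tf

def export_tflite(model, path="model.tflite"):
    """Convert a trained Keras model to TensorFlow Lite and save it."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_bytes = converter.convert()
    with open(path, "wb") as f:
        f.write(tflite_bytes)
    return tflite_bytes
```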
The drowsiness detector that runs on the live video feed is implemented in drowsiness_detector.py.