Work in progress, but not too shabby
Detect mood in video. This is a reconstruction of a cool CCTV example I saw at a Spark + AI talk by Thunder Shiviah. You can also have a look at my minutes of the talk.
The example is in the notebook proto-vgg16-nn.ipynb (the most stable version). There is also a version that uses an LSTM to classify sequences of video frames. I actually created that one first, but found that it was overkill for this example.
Training:
- Record a set of "relaxed" and "excited" videos on the MacBook Pro webcam.
- Extract features from individual frames using a pre-trained feature extractor.
- Train a model that can tell the difference between relaxed and excited, using Logistic Regression or an LSTM. Probably the latter.
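The classifier step above can be sketched as follows. This is a minimal illustration, not the notebook's actual code: the feature vectors here are random stand-ins for the pre-trained extractor's output (VGG16 with average pooling yields 512-dimensional vectors), and scikit-learn's Logistic Regression stands in for the trained model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in features: in the real pipeline these come from running
# each video frame through a pre-trained VGG16 feature extractor.
rng = np.random.default_rng(0)
relaxed = rng.normal(loc=0.0, scale=1.0, size=(50, 512))  # class 0
excited = rng.normal(loc=1.0, scale=1.0, size=(50, 512))  # class 1

X = np.vstack([relaxed, excited])
y = np.array([0] * 50 + [1] * 50)

# Fit a simple linear classifier on the frame features.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))  # near-perfect on this separable toy data
```

An LSTM variant would instead consume *sequences* of per-frame feature vectors, which is what the LSTM notebook does.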
Inference:
- Continuously capture video from the MacBook Pro webcam.
- Print either "safe" or "danger".
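The inference loop could look like the sketch below. The names (`clf`, `extract_features`, the 0.5 threshold) are illustrative assumptions, not the notebook's actual API; the capture loop follows the standard OpenCV `VideoCapture` pattern.

```python
def label(prob_excited, threshold=0.5):
    """Map the model's probability of 'excited' to the printed label."""
    return "danger" if prob_excited >= threshold else "safe"

def run(clf, extract_features):
    """Capture webcam frames and print a label per frame. Press q to quit."""
    import cv2  # deferred import: only needed when actually capturing

    cap = cv2.VideoCapture(0)  # default webcam
    try:
        while True:
            ok, frame_bgr = cap.read()
            if not ok:
                break
            # OpenCV delivers BGR; VGG16 preprocessing expects RGB, 224x224.
            frame = cv2.cvtColor(cv2.resize(frame_bgr, (224, 224)),
                                 cv2.COLOR_BGR2RGB)
            feats = extract_features(frame[None])   # shape (1, n_features)
            print(label(clf.predict_proba(feats)[0, 1]))
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        cap.release()

print(label(0.2), label(0.9))  # safe danger
```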
Project dependencies:
- Python 3.6
- pip install keras
- pip install tensorflow
- pip install numpy
- pip install Pillow
- pip install opencv-python
Useful links:
- https://www.learnopencv.com/read-write-and-display-a-video-using-opencv-cpp-python/
- https://medium.com/@franky07724_57962/using-keras-pre-trained-models-for-feature-extraction-in-image-clustering-a142c6cdf5b1
- (didn't work) https://becominghuman.ai/extract-a-feature-vector-for-any-image-with-pytorch-9717561d1d4c