- Detects three sign-language words: "I", "love", and "code"
- Uses MediaPipe Holistic to detect face, pose, and hand landmarks
- Uses the TensorFlow Keras Sequential API to build a stacked LSTM model
- The actions to detect can be customized by changing the variables in Step 4
- Follows the tutorial at: https://www.youtube.com/watch?v=doDUihpj6ro
- Original code at: https://github.com/nicknochnack/ActionDetectionforSignLanguage
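The MediaPipe Holistic step above produces per-frame landmarks for the face, pose, and both hands. A sketch of how those detections can be flattened into a fixed-length feature vector is shown below (the 33/468/21 landmark counts are MediaPipe Holistic's output sizes; the `extract_keypoints` helper name and zero-filling of missing detections mirror the linked tutorial's approach and are assumptions, not part of this repo's verified code):

```python
import numpy as np

# Feature sizes per MediaPipe Holistic detection:
# pose: 33 landmarks x (x, y, z, visibility); face: 468 x (x, y, z);
# each hand: 21 x (x, y, z) -> 1662 values per frame in total.
POSE_SIZE, FACE_SIZE, HAND_SIZE = 33 * 4, 468 * 3, 21 * 3

def extract_keypoints(results):
    """Flatten a Holistic results object into one 1662-value vector.

    `results` is what `mp.solutions.holistic.Holistic().process(image)`
    returns; landmarks that were not detected are zero-filled so the
    vector length stays constant across frames.
    """
    def flat(landmarks, size, with_visibility=False):
        if landmarks is None:
            return np.zeros(size)
        if with_visibility:
            return np.array([[p.x, p.y, p.z, p.visibility]
                             for p in landmarks.landmark]).flatten()
        return np.array([[p.x, p.y, p.z] for p in landmarks.landmark]).flatten()

    return np.concatenate([
        flat(results.pose_landmarks, POSE_SIZE, with_visibility=True),
        flat(results.face_landmarks, FACE_SIZE),
        flat(results.left_hand_landmarks, HAND_SIZE),
        flat(results.right_hand_landmarks, HAND_SIZE),
    ])
```

Keeping the vector length fixed (even when a hand or face is out of frame) is what lets the per-frame features be stacked into equal-shape sequences for training.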
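The LSTM model mentioned above can be sketched with the Keras Sequential API roughly as in the linked tutorial; the sequence length of 30 frames, the 1662-feature input, and the exact layer sizes are assumptions taken from that tutorial, and the three-word action list comes from this README:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

# Actions to detect; change this list (Step 4 in the notebook) to customize.
actions = ["I", "love", "code"]

def build_model(sequence_length=30, num_features=1662, num_actions=len(actions)):
    # Stacked LSTMs over (frames, keypoints) sequences, ending in a softmax
    # over the action classes. Layer sizes follow the tutorial's setup.
    model = Sequential([
        Input(shape=(sequence_length, num_features)),
        LSTM(64, return_sequences=True, activation="relu"),
        LSTM(128, return_sequences=True, activation="relu"),
        LSTM(64, return_sequences=False, activation="relu"),
        Dense(64, activation="relu"),
        Dense(32, activation="relu"),
        Dense(num_actions, activation="softmax"),
    ])
    model.compile(optimizer="Adam", loss="categorical_crossentropy",
                  metrics=["categorical_accuracy"])
    return model
```

`return_sequences=True` on the first two LSTM layers passes the full frame-by-frame sequence to the next layer; the last LSTM emits a single summary vector for classification.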