
# Gesture-Recognition

## Project Description

Imagine you are working as a data scientist at a home electronics company that manufactures state-of-the-art smart televisions. You want to develop a cool feature for the smart TV that recognises five different gestures performed by the user, letting users control the TV without a remote.

The gestures are continuously monitored by the webcam mounted on the TV. Each video is a sequence of 30 frames (or images). Each gesture corresponds to a specific command:

  • Thumbs up: Increase the volume
  • Thumbs down: Decrease the volume
  • Left swipe: 'Jump' backwards 10 seconds
  • Right swipe: 'Jump' forward 10 seconds
  • Stop: Pause the movie
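
In code, these commands can be wired up with a simple lookup table. The label order below is a hypothetical assumption (the authoritative mapping lives in the dataset's CSV files), so verify it before use:

```python
# Hypothetical label-to-command mapping; confirm the actual numeric labels
# against the train/val CSV files before relying on this ordering.
GESTURE_COMMANDS = {
    0: ("Left swipe", "jump backwards 10 seconds"),
    1: ("Right swipe", "jump forward 10 seconds"),
    2: ("Stop", "pause the movie"),
    3: ("Thumbs down", "decrease the volume"),
    4: ("Thumbs up", "increase the volume"),
}
```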

## Understanding the Dataset

The training data consists of a few hundred videos, each categorised into one of the five classes. Each video (typically 2-3 seconds long) is divided into a sequence of 30 frames (images). These videos were recorded by various people performing one of the five gestures in front of a webcam, similar to the one the smart TV will use.

The data is provided as a zip file containing a 'train' and a 'val' folder, each with a corresponding CSV file. These folders are in turn divided into subfolders, where each subfolder represents a video of a particular gesture and contains its 30 frames (images). Note that all images within a particular video subfolder have the same dimensions, but different videos may differ: frames are either 360x360 or 120x160 pixels, depending on the webcam used to record them. Hence, some pre-processing is needed to standardise the videos.
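
A minimal pre-processing sketch, assuming OpenCV (cv2) and NumPy are available; the 120x120 target size is an illustrative choice rather than a value fixed by the dataset:

```python
import os
import cv2
import numpy as np

TARGET_SIZE = (120, 120)  # illustrative target resolution

def load_video_frames(video_dir, target_size=TARGET_SIZE):
    """Read the 30 frames of one video subfolder, resized and scaled to [0, 1].

    Assumes the folder contains only the 30 frame image files.
    """
    frames = []
    for name in sorted(os.listdir(video_dir)):
        img = cv2.imread(os.path.join(video_dir, name))  # BGR, uint8
        img = cv2.resize(img, target_size)               # standardise dimensions
        frames.append(img.astype(np.float32) / 255.0)    # simple normalisation
    return np.stack(frames)  # shape: (30, 120, 120, 3)
```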

Each row of a CSV file represents one video and contains three pieces of information: the name of the subfolder containing the 30 images of the video, the name of the gesture, and the numeric label (between 0 and 4) of the video.
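
A small parsing sketch; it assumes semicolon-separated fields, which is how this dataset is commonly distributed, so adjust the delimiter if your CSVs differ:

```python
def parse_csv(csv_path):
    """Yield (subfolder_name, gesture_name, label) for each video in the CSV."""
    with open(csv_path) as f:
        for line in f:
            if not line.strip():        # skip blank trailing lines
                continue
            folder, gesture, label = line.strip().split(";")
            yield folder, gesture, int(label)
```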

Your task is to train a model on the 'train' folder that also performs well on the 'val' folder (as is usual in ML projects). The test folder has been withheld for evaluation purposes; your final model's performance will be measured on the 'test' set.

To get started with the model-building process, you first need to get the data onto your storage.
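
For example, assuming the archive has been downloaded as `gesture_data.zip` (a hypothetical filename), it can be extracted with the standard library:

```python
import zipfile

# Hypothetical archive name; substitute the actual file you downloaded.
with zipfile.ZipFile("gesture_data.zip") as zf:
    zf.extractall("data")
# Expected layout afterwards: data/train/, data/val/, plus the two CSV files.
```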

## Observations

| Experiment | Model Configuration | Training Accuracy (categorical) | Validation Accuracy |
|:----------:|:--------------------|:-------------------------------:|:-------------------:|
| 1 | Conv3D (3 layers) | 96.68% | 57.00% |
| 2 | Conv2D + TimeDistributed + GRU | 81.60% | 31.00% |
| 3 | Conv2D + TimeDistributed + GlobalAveragePooling2D | 75.72% | 39.00% |
| 4 | Conv3D + LSTM | 96.53% | 54.00% |
| 5 | Conv3D (4 layers) | 97.59% | 53.00% |
| 6 | Transfer Learning (MobileNet) + TimeDistributed + LSTM + GRU | 99.91% | 98.53% |
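
As a rough sketch of the winning configuration (Experiment 6), the model below applies a frozen MobileNet to each frame via `TimeDistributed`, then stacks an LSTM and a GRU for temporal modelling. Layer sizes, dropout, and the optimiser are illustrative assumptions, not the exact values from the notebook:

```python
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.layers import (Dense, Dropout, GRU, Input, LSTM,
                                     TimeDistributed)
from tensorflow.keras.models import Sequential

FRAMES, H, W, C, NUM_CLASSES = 30, 120, 120, 3, 5

# Per-frame feature extractor: pretrained MobileNet, global-average pooled.
base = MobileNet(weights="imagenet", include_top=False,
                 pooling="avg", input_shape=(H, W, C))
base.trainable = False  # transfer learning: keep ImageNet features frozen

model = Sequential([
    Input(shape=(FRAMES, H, W, C)),
    TimeDistributed(base),             # apply MobileNet to each of the 30 frames
    LSTM(128, return_sequences=True),  # temporal modelling over the frame features
    GRU(64),                           # final summary of the sequence
    Dropout(0.25),
    Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["categorical_accuracy"])
```

Freezing the pretrained features keeps the trainable parameter count small, which is consistent with Experiment 6's much smaller train/validation gap compared to the from-scratch Conv3D models.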