Build a system that recognizes words communicated using American Sign Language (ASL), using a dataset of tracked hand and nose positions extracted from video. The goal is to train a set of Hidden Markov Models (HMMs) on part of this dataset and use them to identify individual words from test sequences.
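As a rough sketch of the overall approach (assuming per-word feature sequences have already been extracted as NumPy arrays; the use of hmmlearn's GaussianHMM and the parameter values here are illustrative assumptions, not requirements of the dataset):

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_word_models(word_sequences, n_states=3):
    """Train one GaussianHMM per word.

    word_sequences: dict mapping a word to a list of 2-D feature arrays,
    one array per training example with shape (n_frames, n_features).
    """
    models = {}
    for word, sequences in word_sequences.items():
        X = np.concatenate(sequences)              # stack all examples
        lengths = [len(seq) for seq in sequences]  # frame count per example
        models[word] = GaussianHMM(
            n_components=n_states, covariance_type="diag", n_iter=1000
        ).fit(X, lengths)
    return models

def recognize(models, test_sequence):
    """Guess the word whose model assigns the highest log-likelihood."""
    return max(models, key=lambda w: models[w].score(test_sequence))
```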
The data in the asl_recognizer/data/ directory was derived from
the RWTH-BOSTON-104 Database.
The hand positions (hand_condensed.csv) are pulled directly from
the database boston104.handpositions.rybach-forster-dreuw-2009-09-25.full.xml. The three markers are:
- 0: speaker's left hand
- 1: speaker's right hand
- 2: speaker's nose

X and Y values of the video frame increase left to right and top to bottom.
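A minimal sketch of loading the tracked positions with pandas; the column names used below (frame, left-x, right-x, nose-x, and so on) are assumptions for illustration, so check the CSV header for the actual names:

```python
import pandas as pd

# Load the condensed hand/nose positions.
df = pd.read_csv("asl_recognizer/data/hand_condensed.csv")
print(df.columns.tolist())  # confirm the real column names first

# Example transformation (assumed column names): right-hand position
# measured relative to the nose, which reduces variation in where the
# speaker stands in the frame.
df["right-dx"] = df["right-x"] - df["nose-x"]
df["right-dy"] = df["right-y"] - df["nose-y"]
```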
Take a look at the sample ASL recognizer video to see how the hand locations are tracked.
The videos show full sentences; translations are provided in the database.
For the purposes of this project, the sentences have been pre-segmented into words
based on slow-motion examination of the video files.
These segments are provided in the train_words.csv and test_words.csv files
in the form of start and end frames (inclusive).
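As an illustrative sketch of using those segments to pull out the frames for a single word (the column names video, frame, startframe, and endframe are assumptions; verify them against the CSV headers):

```python
import pandas as pd

words = pd.read_csv("asl_recognizer/data/train_words.csv")
positions = pd.read_csv("asl_recognizer/data/hand_condensed.csv")

def frames_for_word(row):
    """Return the tracked positions for one segmented word.

    Both the start and end frames are inclusive, so the slice uses <=
    comparisons on both ends.
    """
    mask = (
        (positions["video"] == row["video"])
        & (positions["frame"] >= row["startframe"])
        & (positions["frame"] <= row["endframe"])
    )
    return positions[mask]

example = frames_for_word(words.iloc[0])
print(example.shape)
```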
The videos in the corpus include recordings from three different ASL speakers.
The mapping of each video to its speaker is provided in the speaker.csv file.
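One possible way to attach the speaker information to the tracked positions, assuming speaker.csv contains video and speaker columns (adjust to the real header):

```python
import pandas as pd

speakers = pd.read_csv("asl_recognizer/data/speaker.csv")
positions = pd.read_csv("asl_recognizer/data/hand_condensed.csv")

# Join on the video identifier so each frame carries its speaker label,
# which makes any per-speaker handling of the coordinates straightforward.
positions = positions.merge(speakers, on="video", how="left")
print(positions["speaker"].value_counts())
```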