More details about this work can be found in the paper "Talking with Your Hands: Scaling Hand Gestures and Recognition with CNNs".
In this paper, four models are used: 2D-SqueezeNet (version 1.1), 2D-MobileNetV2, 3D-SqueezeNet (version 1.1), and 3D-MobileNetV2.
The code for training and testing the first two (2D) models is in the directory GeScale_2D; the code for the two 3D models is in GeScale_3D.
You can download SHGD here. The dataset consists of two parts: single gestures and 3-tuple gestures. Each record contains both infrared images and depth images.
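Since each record pairs an infrared frame with a depth frame, a loader typically needs to match the two streams frame by frame. Below is a minimal sketch of such pairing logic; the file-naming scheme (zero-padded frame indices such as `ir_000001.png` / `depth_000001.png`) is an assumption for illustration, not the documented SHGD layout.

```python
import re


def pair_frames(infrared_files, depth_files):
    """Pair infrared and depth frames of one record by frame index.

    Assumes a hypothetical naming scheme in which each filename ends
    with a numeric frame index before the extension; adapt the regex
    to the actual SHGD file names.
    """
    def frame_index(name):
        match = re.search(r"(\d+)\.\w+$", name)
        return int(match.group(1)) if match else None

    depth_by_index = {frame_index(f): f for f in depth_files}
    return [
        (ir, depth_by_index[frame_index(ir)])
        for ir in sorted(infrared_files, key=frame_index)
        if frame_index(ir) in depth_by_index
    ]


pairs = pair_frames(
    ["ir_000002.png", "ir_000001.png"],
    ["depth_000001.png", "depth_000002.png"],
)
```

Frames missing from one of the two streams are simply skipped, so the loader stays robust to incomplete records.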
If you find this work useful or use the code/dataset, please cite as follows:
@inproceedings{kopuklu2019talking,
title={Talking with your hands: Scaling hand gestures and recognition with cnns},
author={Kopuklu, Okan and Rong, Yao and Rigoll, Gerhard},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops},
pages={0--0},
year={2019}
}