guglielmocamporese/learning_invariances_in_speech_recognition
In this work I investigate the speech command recognition task by developing and analyzing deep learning models. The state-of-the-art approach uses convolutional neural networks (CNNs), whose convolutional structure is well suited to learning the locally correlated representations found in speech. In particular, I develop several CNNs trained on the Google Speech Command Dataset and test them under different scenarios.

A central problem in speech recognition is that different people pronounce the same words differently; one way of making a model invariant to this variability is to augment the dataset by perturbing the input. In this work I study two kinds of augmentation: Vocal Tract Length Perturbation (VTLP) and Synchronous Overlap and Add (SOLA), which locally perturb the input in frequency and in time, respectively. The models trained on augmented data outperform all the models trained on the unaugmented dataset in accuracy, precision, and recall.

The design of the CNN also affects the invariances it learns: an Inception-style architecture, which convolves with kernels of several different sizes in parallel, helps the network learn features that are invariant to speech variability. Intuitively, this is because the different kernel sizes let the model detect speech patterns of different lengths in the audio features.
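To make the frequency perturbation concrete, here is a minimal NumPy sketch of VTLP as a piecewise-linear warp of a spectrogram's frequency axis, in the style of Jaitly and Hinton (2013). The boundary frequency `f_hi`, the warp-factor range, and the function names are illustrative assumptions, not the repository's actual implementation.

```python
import numpy as np

def vtlp_warp_freqs(freqs, alpha, f_hi, sr):
    """Piecewise-linear VTLP warp: frequencies below a boundary are
    scaled by alpha; the rest are mapped linearly so that the Nyquist
    frequency stays fixed."""
    nyquist = sr / 2.0
    boundary = f_hi * min(alpha, 1.0) / alpha
    return np.where(
        freqs <= boundary,
        freqs * alpha,
        nyquist - (nyquist - f_hi * min(alpha, 1.0))
        * (nyquist - freqs) / (nyquist - boundary),
    )

def apply_vtlp(spec, sr, alpha, f_hi=4800.0):
    """Resample the frequency axis of a magnitude spectrogram
    (shape: [n_freq, n_frames]) according to the VTLP warp."""
    n_freq = spec.shape[0]
    freqs = np.linspace(0.0, sr / 2.0, n_freq)
    warped = vtlp_warp_freqs(freqs, alpha, f_hi, sr)
    out = np.empty_like(spec)
    for t in range(spec.shape[1]):
        # read the warped spectrum back onto the original frequency grid
        out[:, t] = np.interp(freqs, warped, spec[:, t])
    return out

# e.g. warp a random magnitude spectrogram with alpha drawn from [0.9, 1.1]
spec = np.abs(np.random.randn(161, 101))
alpha = np.random.uniform(0.9, 1.1)
warped_spec = apply_vtlp(spec, sr=16000, alpha=alpha)
```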
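SOLA, the time-domain counterpart, changes the duration of a signal without changing its pitch: frames are copied at a scaled analysis hop, and each frame is aligned to the output tail by cross-correlation before being cross-faded in. The sketch below is a minimal illustration of this idea under assumed parameters; the frame length, overlap, and search window are arbitrary choices, not the values used in this repository.

```python
import numpy as np

def sola_stretch(x, rate, frame_len=1024, overlap=256, search=128):
    """Time-stretch x by `rate` (>1 = faster/shorter output) with
    synchronous overlap-add."""
    hop_out = frame_len - overlap          # synthesis hop
    hop_in = int(round(hop_out * rate))    # analysis hop
    out = list(x[:frame_len].astype(float))
    pos = hop_in
    while pos + frame_len + search < len(x):
        tail = np.asarray(out[-overlap:])
        # pick the offset in [0, search) whose overlap region best
        # matches the current output tail (plain dot product here;
        # a normalized correlation would be more robust)
        best_k, best_c = 0, -np.inf
        for k in range(search):
            c = np.dot(tail, x[pos + k : pos + k + overlap])
            if c > best_c:
                best_c, best_k = c, k
        frame = x[pos + best_k : pos + best_k + frame_len].astype(float)
        # cross-fade the overlap region, then append the rest of the frame
        fade = np.linspace(0.0, 1.0, overlap)
        out[-overlap:] = (tail * (1 - fade) + frame[:overlap] * fade).tolist()
        out.extend(frame[overlap:].tolist())
        pos += hop_in
    return np.asarray(out)

# e.g. slow a 1 s, 16 kHz clip down to ~0.9x speed (longer output)
x = np.random.randn(16000)
y = sola_stretch(x, rate=0.9)
```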
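The Inception idea is to run convolutions with several kernel sizes in parallel over the same input and concatenate their outputs along the channel axis, so a single block can respond to speech patterns of different extents. A minimal PyTorch sketch of such a block follows; the kernel sizes and channel counts are illustrative, and the repository's actual architecture may differ.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Parallel convolutions with different kernel sizes, concatenated
    along the channel axis."""
    def __init__(self, in_ch, branch_ch=16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # padding=k//2 keeps the spatial size unchanged for odd k
                nn.Conv2d(in_ch, branch_ch, kernel_size=k, padding=k // 2),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(),
            )
            for k in (1, 3, 5, 7)  # illustrative kernel sizes
        ])

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

# e.g. a batch of 1-channel log-mel spectrograms: (N, 1, n_mels, n_frames)
block = InceptionBlock(in_ch=1)
y = block(torch.randn(8, 1, 40, 101))
print(y.shape)  # torch.Size([8, 64, 40, 101])
```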