MarioRuggieri/Emotion-Recognition-from-Speech

Regarding the usage

PrasadParsodkar opened this issue · 1 comment

Hello Sir,
Greetings!!
I am not clear about the usage of your "Emotion recognition from speech" code. I am having trouble interpreting the exact meaning of the -l and -e arguments and what values need to be passed for them.
So please help us out; we would be very grateful to get it working using your repository.
Hoping for a positive response.
Thank you,
Prasad Parsodkar

The application reads samples and targets from a dataset (the supported datasets are listed in the README file, but you can add support for a new dataset by editing the dataset.py file).
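A new dataset reader would follow the same contract as the existing ones: a function that walks the dataset directory and returns each sample together with its emotion target. The sketch below is purely illustrative (the function name, the .wav filter, and the label-in-filename convention are all assumptions, not the actual dataset.py API):

```python
# Hypothetical sketch of a dataset reader to add in dataset.py.
# All names and conventions here are illustrative, not the repo's actual API.
import os

def read_my_dataset(root_dir):
    """Return (paths, targets): one audio file path and one emotion label per sample."""
    paths, targets = [], []
    for fname in sorted(os.listdir(root_dir)):
        if not fname.endswith(".wav"):
            continue
        # Assume the emotion label is encoded in the file name, e.g. "happy_001.wav"
        label = fname.split("_")[0]
        paths.append(os.path.join(root_dir, fname))
        targets.append(label)
    return paths, targets
```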

In the reading phase, the application translates the audio files into raw numerical vectors and links each one to a target. With the -l option (no arguments needed) you force the application to load the data and save it into a .p file. If you don't specify this option, the data is read directly from the .p file, avoiding the long reading phase.
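In other words, -l toggles a classic load-or-cache pattern around a pickle (.p) file. A minimal sketch of the idea (the `reader` callable, the `get_data` name, and the `dataset.p` file name are assumptions for illustration; the repo's actual names may differ):

```python
import os
import pickle

def get_data(reader, force_load=False, cache_path="dataset.p"):
    """Return (samples, targets), caching them in a pickle (.p) file.

    `reader` is any callable that performs the slow reading phase,
    e.g. a dataset-reading function from dataset.py.
    """
    if force_load or not os.path.exists(cache_path):
        # -l behaviour: run the long reading phase, then cache the result
        samples, targets = reader()
        with open(cache_path, "wb") as f:
            pickle.dump((samples, targets), f)
    else:
        # Default behaviour: skip reading and reuse the cached data
        with open(cache_path, "rb") as f:
            samples, targets = pickle.load(f)
    return samples, targets
```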

In the feature extraction phase, the application extracts features from the raw numerical samples. With the -e option you force the application to extract the features and save them into a .p file. If you skip this option, features are not generated: they are read directly from the file instead.
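The -e flag works the same way, just one stage later in the pipeline. Again a sketch with assumed names (`extractor`, `get_features`, `features.p`):

```python
import os
import pickle

def get_features(extractor, samples, force_extract=False, cache_path="features.p"):
    """Return feature vectors, caching them in a pickle (.p) file."""
    if force_extract or not os.path.exists(cache_path):
        # -e behaviour: extract features from the raw samples and cache them
        features = [extractor(s) for s in samples]
        with open(cache_path, "wb") as f:
            pickle.dump(features, f)
    else:
        # Default behaviour: read the previously extracted features from the file
        with open(cache_path, "rb") as f:
            features = pickle.load(f)
    return features
```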

So the first time you run the application, the -l and -e options are mandatory because you need to load the data and extract the features. From the second run onwards, data and features are already stored in files, so -l and -e are unnecessary unless you change the feature extraction method and/or the dataset.
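Concretely, assuming the entry script is called main.py (check the README for the actual command line), the first run would look like `python main.py -l -e <other args>`, and later runs on the same dataset and features would simply be `python main.py <other args>`.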

Thank you for using Emotion-Recognition-from-Speech, and sorry I'm late.

Regards,
Mario Ruggieri