Improve accuracy for emotion recognition module
The work done in this repo is outstanding, and the results are very good as well; the accuracy is noticeably higher than what you mention in the README, I would guess around 78% or more. I am really interested in how I can contribute to improving the overall accuracy of the project. Would changing the dataset help more, or would tuning other CNN parameters work better? Or could we use Caffe directly instead of TensorFlow as the backend? Let me know, it would be fun to contribute to this repo.
Thanks & Regards,
Swap
Hello @swapgit, I am happy to hear that you like the project. In order to improve the emotion classification module we could try several things:
- Pre-train on, or train alongside, another emotion dataset (I have tried pre-training with the KDEF dataset, but it didn't show any perceivable increase in accuracy).
- The labels are not uniformly distributed; consequently, we could try to re-train on the existing dataset using a weighted loss (see the sketch below).
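A minimal sketch of the weighted-loss idea using Keras' `class_weight` argument. The file name, input shape, and placeholder arrays are assumptions (FER2013-style 48x48 grayscale inputs with 7 emotion classes), not the repo's actual training script:

```python
import numpy as np
from keras.models import load_model
from keras.utils import to_categorical

# placeholder data standing in for the real emotion dataset arrays
x_train = np.random.rand(256, 48, 48, 1).astype('float32')
y_labels = np.random.randint(0, 7, size=256)
y_train = to_categorical(y_labels, num_classes=7)

# inverse-frequency weights so under-represented emotions contribute more to the loss
counts = np.bincount(y_labels, minlength=7)
class_weight = {i: len(y_labels) / (7.0 * max(c, 1)) for i, c in enumerate(counts)}

model = load_model('emotion_model.hdf5', compile=False)  # hypothetical file name
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=32, epochs=10, class_weight=class_weight)
```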
Thank you! This repo is a miracle :)
[EDITED] Using the initial settings, the preloaded weights yield a lower performance; however, the model can later be trained back up to the reported 66%.
How about using balanced training batches, so that each batch contains the same number of samples from each class? I know it would represent a different class distribution, in which the deviation of the small classes would be narrower, but for me it worked better than class weights.
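A rough sketch of such a balanced batch generator (names, shapes, and the 7-class assumption are mine, not the repo's): each batch draws an equal number of samples from every class, so rare emotions appear as often as common ones.

```python
import numpy as np

def balanced_batch_generator(x, y_labels, num_classes=7, samples_per_class=8):
    """Yield (x_batch, y_batch) with an equal number of samples per class."""
    class_indices = [np.where(y_labels == c)[0] for c in range(num_classes)]
    while True:
        picks = [np.random.choice(idx, samples_per_class) for idx in class_indices]
        batch_idx = np.concatenate(picks)
        np.random.shuffle(batch_idx)
        x_batch = x[batch_idx]
        y_batch = np.eye(num_classes)[y_labels[batch_idx]]  # one-hot encode
        yield x_batch, y_batch

# usage sketch (older Keras API):
# model.fit_generator(balanced_batch_generator(x_train, y_labels),
#                     steps_per_epoch=len(x_train) // 56, epochs=10)
```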
Hello @csbotos, I am happy to hear you like the project :). Yes, we can also try balanced batches. The drop in accuracy could be related to not using the correct optimizer weights. I encountered an issue in Keras in which the optimizer weights were not compatible between Keras versions; therefore, I either deleted them entirely from the hdf5 files or set the compile flag to False when loading the models.
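A sketch of both workarounds described above (the file name is an assumption): either skip the saved optimizer state by loading with `compile=False` and recompiling, or strip the stale optimizer weights from the HDF5 file itself.

```python
import h5py
from keras.models import load_model

# option 1: ignore the saved optimizer state and recompile fresh
model = load_model('emotion_model.hdf5', compile=False)  # hypothetical file name
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# option 2: delete the optimizer weights group from the HDF5 file in place
with h5py.File('emotion_model.hdf5', 'a') as f:
    if 'optimizer_weights' in f:
        del f['optimizer_weights']
```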
Yeah, I just discovered that the learning rate could be too aggressive for the pretrained network; now the algorithm has topped out at 66% again.
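For reference, a small sketch of resuming training with a lower learning rate for the pretrained weights (the file name and the 1e-4 value are assumptions, not tuned settings):

```python
from keras.models import load_model
from keras.optimizers import Adam

model = load_model('emotion_model.hdf5', compile=False)  # hypothetical file name
# smaller than the 1e-3 Adam default typically used when training from scratch
model.compile(optimizer=Adam(lr=1e-4),
              loss='categorical_crossentropy', metrics=['accuracy'])
```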