av-savchenko/face-emotion-recognition

Provide the setting for ENetB0

ltkhang opened this issue · 8 comments

Hi,

Could you please share the settings used to train ENetB0 to reach the reported accuracy of 61.32%?

I can only reproduce about 57%.

I used exactly the same source code as in this repository. To be sure, I ran the training process several times, and in all cases the maximal validation accuracy was higher than 60% for 8 classes. Please verify that your dataset is correct by running my models on your training and validation sets. Moreover, please check that you loaded the model pre-trained on face identification (state_vggface2_enet0_new or something similar) before training on AffectNet.
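For reference, that check could look roughly like this. It is only a sketch: the checkpoint path and format, the identity-head size, and the use of timm are assumptions rather than the repo's exact code.

```python
# Sketch: load face-identification weights before fine-tuning on AffectNet.
# Checkpoint path/format, identity count and timm usage are assumptions.
import timm
import torch

NUM_IDENTITIES = 9131  # assumed size of the identity head stored in the checkpoint
NUM_EMOTIONS = 8       # AffectNet with 8 emotion classes

model = timm.create_model('efficientnet_b0', pretrained=False)
# Recreate the identity head so the checkpoint's weights fit, then load them.
model.classifier = torch.nn.Linear(model.classifier.in_features, NUM_IDENTITIES)
model.load_state_dict(torch.load('state_vggface2_enet0_new.pt', map_location='cpu'))
# Replace the identity head with an emotion head before training on AffectNet.
model.classifier = torch.nn.Linear(model.classifier.in_features, NUM_EMOTIONS)
```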


So you use the unaligned face, i.e., you simply crop it with "img=img[y:y+h,x:x+w]", is that right?

Yes, the preprocessing is the same as in train_emotions.ipynb. If you got the accuracy reported in the README, the preprocessing is OK, so it is necessary to check whether the correct pre-trained model was used.
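For clarity, the unaligned-face preprocessing discussed here amounts to a plain bounding-box crop plus a resize, with no alignment; a minimal sketch, where the detector output format and the 224x224 input size are assumptions:

```python
# Minimal sketch of the unaligned-face crop; detector and input size are assumed.
import cv2

def crop_face(img, box, size=224):
    x, y, w, h = box                       # bounding box from any face detector
    face = img[y:y + h, x:x + w]           # simple crop, no alignment
    return cv2.resize(face, (size, size))  # resize to the network input resolution
```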


I have validated my preprocessed images with your pre-trained model and the results are the same.

May I ask how you called the train method for "EnetB0"?

I am using:

# (two lines that set requires_grad for feature extraction, not shown)
train(model, 3, 0.001, True)

# fine-tune
train(model, 6, 1e-4, True)

But I cannot get results as good as yours.
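In other words, the two-stage schedule above looks roughly like this. It is a sketch that assumes the notebook's model and its train(model, epochs, learning_rate, robust) helper are already defined, and that the 'classifier' layer name follows timm's efficientnet_b0:

```python
# Sketch of the two-stage schedule; `model` and `train` come from the notebook.

# Stage 1: freeze the backbone and train only the classifier head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith('classifier')
train(model, 3, 0.001, True)

# Stage 2: unfreeze everything and fine-tune with a lower learning rate.
for param in model.parameters():
    param.requires_grad = True
train(model, 6, 1e-4, True)
```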

My bad, I found a small typo in the RobustOptimizer. I have just fixed it; please re-run your training script and check whether it works for you now.


The change in RobustOptimizer doesn't seem to affect the main logic.
Btw, the best result I can achieve is 61.11%.
May I ask whether you needed to run the training several times to reach the best result?

The change in RobustOptimizer is small, but it is very important. The eps parameter had to be renamed; otherwise it was assigned to the eps parameter of the wrapped base optimizer. I'm not sure whether it is possible to get 60+% with the previous version of the code, but every run of the latest code reaches 60+% accuracy. When I ran the previous, incorrect version of RobustOptimizer 3 times, I usually got less than 59%. BTW, I have uploaded a new version of train_emotions_pytorch to present the training results.
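To illustrate the kind of keyword collision being described, here is a minimal sketch in the spirit of a SAM-style wrapper; it is not the repo's actual RobustOptimizer code, and the class name and defaults are made up for illustration:

```python
# Sketch of the name collision: if the perturbation radius were stored in the
# param-group defaults under the name `eps`, the wrapped Adam would pick it up
# as its numerical-stability eps (0.05 instead of 1e-8). Renaming it avoids that.
import torch

class NeighborhoodWrapper(torch.optim.Optimizer):
    def __init__(self, params, base_optimizer=torch.optim.Adam, rho=0.05, **kwargs):
        defaults = dict(rho=rho, **kwargs)          # the radius gets its own name
        super().__init__(params, defaults)
        # The base optimizer is built from the same param groups; with the old
        # name `eps`, those groups already contained eps=0.05 and Adam kept it.
        self.base_optimizer = base_optimizer(self.param_groups, **kwargs)

model = torch.nn.Linear(4, 2)
opt = NeighborhoodWrapper(model.parameters(), torch.optim.Adam, lr=1e-3)
print(opt.base_optimizer.param_groups[0]['eps'])    # 1e-8, Adam's own default
```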


Thank you, I got it