maxstrobel/HCN-PrototypeLoss-PyTorch

Vanilla HCN accuracy


Thanks for the neat implementation.
I've run the provided code but couldn't reach the reported accuracy of 88.12%; I got 81.1%.
Have you applied any data pre-processing for NTU RGB+D beyond what is in the code (interpolation, setting the hip joint to zero), such as normalization?
Have you seen results similar to mine in your previous experiments? If so, could you give me a hint on how to train the network properly? My rough understanding of the pre-processing in question is sketched below.
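For reference, this is roughly what I understand by "interpolation, set hip to zero". It is only an illustration under my own assumptions; the array layout, the hip joint index, and the target sequence length are hypothetical and not taken from the repository:

```python
# Sketch of the pre-processing I have in mind (NOT the repository's code):
# resample each skeleton sequence to a fixed number of frames and shift
# the hip joint to the origin in every frame.
import numpy as np
from scipy.interpolate import interp1d

HIP_JOINT = 0        # assumed index of the hip joint
TARGET_FRAMES = 32   # assumed fixed sequence length

def preprocess(skeleton: np.ndarray) -> np.ndarray:
    """skeleton: (num_frames, num_joints, 3) array of 3D joint positions."""
    # Interpolate every joint coordinate to TARGET_FRAMES time steps.
    t_old = np.linspace(0.0, 1.0, skeleton.shape[0])
    t_new = np.linspace(0.0, 1.0, TARGET_FRAMES)
    resampled = interp1d(t_old, skeleton, axis=0)(t_new)
    # Translate so the hip joint sits at the origin in every frame.
    resampled -= resampled[:, HIP_JOINT:HIP_JOINT + 1, :]
    return resampled
```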

Hi @sangwoo3,

Did you change something in the code, or did you just clone and run it?
I will look into the issue and report any deviations in the results that I find.
However, I am a little busy these days, so it may take a week.

Best,
Max

Hi @sangwoo3 ,

I reran the experiments from scratch and got results very similar to my previous ones.
I updated the README and added setups for cross-view and cross-subject evaluation. Maybe you ran the experiments on the cross-subject split?

| Vanilla HCN @ 200 epochs | Cross-View | Cross-Subject |
| --- | --- | --- |
| Accuracy | 89.5 % | 83.8 % |
| Top-2 accuracy | 95.65 % | 91.7 % |
| Top-5 accuracy | 98.79 % | 97.2 % |
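For reference, the top-2 and top-5 numbers are top-k accuracies. A minimal sketch of how such a metric can be computed in PyTorch (an illustration, not necessarily the exact code in this repository):

```python
# Top-k accuracy: a prediction counts as correct if the true class
# is among the k highest-scoring classes.
import torch

def topk_accuracy(logits: torch.Tensor, targets: torch.Tensor, k: int = 1) -> float:
    """logits: (batch, num_classes) scores, targets: (batch,) class indices."""
    topk = logits.topk(k, dim=1).indices            # (batch, k) predicted classes
    correct = (topk == targets.unsqueeze(1)).any(dim=1)
    return correct.float().mean().item()
```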

I will later add updated experiments for Prototype-HCN.

If you can provide more details, maybe we can find your issue.

Best,
Max

Since there has been no response, I'll close this issue. We can reopen it if necessary.