aleflabo/MoCoDAD

Inconsistent accuracy

Closed this issue · 5 comments

Hi, thank you for open-sourcing your great work!
I trained the MoCoDAD model on the Avenue dataset following the README and then ran the evaluation; the accuracy was 86.3%. But when I use the pretrained models and run the evaluation, the accuracy reaches 89%. I trained the model with the default parameters in the mocodad_train.yaml file, so I am not sure what the problem is.
I see that the "use_hr" parameter in the mocodad_train.yaml file is set to false. Should I set this parameter to True during training?
I hope you can take some time to answer my question. Thank you very much!

Hi! Thank you for your interest in our work and I apologize for the late reply.

The Avenue dataset doesn't have a validation split to monitor the metric, so the best results can be obtained by either monitoring the training loss or validating on UBnormal's validation set. Unfortunately, the best performance on UBnormal doesn't necessarily correlate with the best performance on Avenue; the rule of thumb is to stop after a few training epochs, typically 3-5.
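If it helps, both strategies map onto standard PyTorch Lightning options (the repository trains with Lightning). Below is a minimal sketch of the idea with placeholder names; "model", "datamodule", and the monitored metric key are not the repository's actual identifiers, and the key must match whatever your validation step logs.

# Hedged sketch: fixed short schedule vs. early stopping on a validation metric.
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping

# Option 1: the rule of thumb -- simply stop after a few epochs.
trainer = Trainer(max_epochs=5)

# Option 2: monitor the metric computed on UBnormal's validation split
# and stop when it plateaus.
early_stop = EarlyStopping(monitor="AUC", mode="max", patience=3)
trainer = Trainer(max_epochs=100, callbacks=[early_stop])

# trainer.fit(model, datamodule=datamodule)  # placeholders, not the repo's classes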

To use UBnormal's validation set, you can simply link the corresponding folder inside Avenue's folder by issuing the following command:

# Make sure your working directory is "MoCoDAD"
ln -s "$(pwd)/data/UBnormal/validating" "$(pwd)/data/HR-Avenue"
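A quick optional sanity check that the link resolves:

# The "validating" entry should show up as a symlink to UBnormal's folder
ls -l "$(pwd)/data/HR-Avenue"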

I'll update the repository with the configuration that produces 88.51 AUC.

I see that the "use_hr" parameter in the mocodad_train.yaml file is set to false. Should I set this parameter to True during training?

The use_hr parameter is only considered when doing inference on UBnormal's validation and test splits, hence it has no effect when working with the Avenue dataset.
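For intuition, a use_hr-style flag typically gates which frames enter the evaluation. The sketch below is a hypothetical illustration, not the repository's actual code; the function and argument names are invented for the example.

import numpy as np

# Hypothetical illustration of a use_hr-style switch: keep only frames
# with human-related (HR) annotations when evaluating on UBnormal.
def filter_eval_frames(scores: np.ndarray, hr_mask: np.ndarray,
                       dataset: str, use_hr: bool) -> np.ndarray:
    # HR annotations only exist for UBnormal's validation/test splits,
    # so the flag is a no-op for Avenue.
    if dataset == "UBnormal" and use_hr:
        return scores[hr_mask]
    return scores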

I'm available for any further clarification.

Stefano

Thank you for your reply, but I still don't quite understand. Do you mean that the checkpoint with the smallest training error on the Avenue dataset may not achieve the test-set accuracy reported in the paper? In other words, because there is no validation set, simply running "python train_MoCoDAD.py --config config/[Avenue/UBnormal/STC]/{config_name}.yaml" followed by "python eval_MoCoDAD.py --config /args.exp_dir/args.dataset_choice/args.dir_name/config.yaml" cannot reach the optimal accuracy on the Avenue dataset?
Another question: the UBnormal dataset contains a validation set. Can we train and test according to the steps in the README and directly obtain the optimal accuracy reported in the paper? I trained and tested on the UBnormal dataset following the README, but the results were inconsistent with the AUC results in the paper.
Meanwhile, I also noticed that the learning rate mentioned in the paper is 0.0001, while the learning rate in the mocodad_train.yaml files for Avenue and UBnormal is set to 0.001. I am not sure whether this is an issue.
I hope you can take the time to answer the above questions. Thank you very much!

Hi, sorry for the late and incomplete reply, but we've been quite busy with other commitments lately.

Do you mean that the checkpoint with the smallest training error on the Avenue dataset may not achieve the test-set accuracy reported in the paper?

You're right: a lower training loss may not imply higher test performance. Our intuition is that the model either starts to generalize to abnormal data as well, even though it is trained on normal sequences only, or excessively overfits the normal training sequences, thus yielding a higher reconstruction error on the normal sequences in the test set. As explained in the previous message, the rule of thumb is to stop the training after a few epochs, or to monitor the validation performance on the UBnormal validation set, although with a significant domain gap.

The UBnormal dataset contains a validation set. Can we train and test according to the steps in the README and directly obtain the optimal accuracy reported in the paper?

Yes. We are aware that, after refactoring the code, there is a small gap with the reported performance when training from scratch; we are working to solve this issue. In any case, the checkpoints we provide together with the code reach the performance reported in the paper.

I also noticed that the learning rate mentioned in the paper is 0.0001, while the learning rate in the mocodad_train.yaml files for Avenue and UBnormal is set to 0.001. I am not sure whether this is an issue.

Thank you for reporting this mismatch; that is a typo in the supplementary material.

I'll follow up as soon as possible with further clarification on the performance when training from scratch.

I wish you a good day,

Stefano

Thank you for your reply. By the way, may I ask how to obtain the results reported in the paper on the HR-STC and UBnormal datasets? I also don't understand the "pad_size" parameter in config.yaml, which corresponds to the "pad_scores" function in the code. I hope you can explain its purpose. Thank you very much!