dapowan/LIMU-BERT-Public

About classifier_bert frozen

Closed this issue · 6 comments

Hi, I have another question about the frozen encoder in classifier_bert.
If I understand correctly, neither the paper nor the documentation mentions freezing for classifier_bert, right?
Is the freezing simply a consequence of the two separate training phases?
When training the classifier, the embeddings fed into it are generated by LIMU-BERT, so the encoder's parameters cannot be updated, which is equivalent to freezing it, right?
Thanks a lot!
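
For what it's worth, here is a minimal PyTorch sketch of that reasoning, with placeholder modules standing in for LIMU-BERT and the downstream classifier (none of these names are the repo's actual classes): because the embeddings are computed under `torch.no_grad()`, no gradient can ever reach the encoder, so it is frozen by construction.

```python
import torch
import torch.nn as nn

encoder = nn.Linear(6, 72)      # placeholder for the pretrained LIMU-BERT encoder
classifier = nn.Linear(72, 6)   # placeholder for the downstream classifier

x = torch.randn(32, 6)          # placeholder batch of IMU features
encoder.eval()
with torch.no_grad():           # embeddings are computed without a graph,
    emb = encoder(x)            # so no gradient can ever reach the encoder

# Only the classifier's parameters are handed to the optimizer:
# training on cached embeddings is equivalent to freezing the encoder.
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
```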

You say, “The classifier_bert.py is to train the classifier by freezing the encoder. The command is the same.” What is the difference between classifier_bert.py and classifier.py? I can't recall the difference even after reading the code. Thanks!

Sorry, I put it the wrong way. classifier.py trains the classifier on embeddings generated by LIMU-BERT, which is equivalent to training the classifier with the encoder (LIMU-BERT) frozen. classifier_bert.py trains the classifier together with LIMU-BERT, so the encoder's parameters can be fine-tuned. The original paper uses classifier.py. classifier_bert.py is for researchers who want to train LIMU-BERT and their downstream models simultaneously; they can easily adapt it to their own needs. Hope this clarifies your questions.
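
To make the distinction concrete, here is a rough sketch of the two training loops, assuming generic placeholder modules and a toy dataset; this is not the exact code of classifier.py or classifier_bert.py:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

encoder = nn.Linear(6, 72)      # placeholder for LIMU-BERT
classifier = nn.Linear(72, 6)   # placeholder for the downstream classifier
criterion = nn.CrossEntropyLoss()
loader = DataLoader(TensorDataset(torch.randn(64, 6),
                                  torch.randint(0, 6, (64,))), batch_size=16)

# --- classifier.py style: encoder frozen, only the classifier is trained ---
opt_frozen = torch.optim.Adam(classifier.parameters(), lr=1e-3)
for x, y in loader:
    with torch.no_grad():                    # encoder only produces embeddings
        emb = encoder(x)
    loss = criterion(classifier(emb), y)
    opt_frozen.zero_grad()
    loss.backward()
    opt_frozen.step()

# --- classifier_bert.py style: encoder and classifier fine-tuned together ---
opt_joint = torch.optim.Adam(list(encoder.parameters()) +
                             list(classifier.parameters()), lr=1e-4)
for x, y in loader:
    loss = criterion(classifier(encoder(x)), y)   # gradients reach the encoder
    opt_joint.zero_grad()
    loss.backward()
    opt_joint.step()
```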

Yes, that's very clear!
I suspect the classifier_bert model performs worse than the classifier model simply because of limited data: training a GRU alone needs less data than training LIMU-BERT + GRU together, so the classifier model, which freezes the encoder, comes out ahead. In my test with 1% of the HHAR data, the gap between the two models is about 5%, in favour of the frozen model. But in theory, with enough data, the unfrozen model should be better, right?
Thanks!

Yes, such a small amount of data might not be able to support the training of both the encoder and classifier.
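
As a side note, if someone adapts the joint (classifier_bert.py-style) setup but finds their labelled data too small, the encoder can still be frozen explicitly so that only the classifier head is trained. A hedged sketch with the same placeholder modules as above:

```python
import torch
import torch.nn as nn

encoder = nn.Linear(6, 72)      # placeholder for the pretrained LIMU-BERT encoder
classifier = nn.Linear(72, 6)   # placeholder for the downstream classifier

for p in encoder.parameters():
    p.requires_grad_(False)     # keep the pretrained weights fixed
encoder.eval()                  # disables dropout etc. in a real transformer encoder

# Optimize only the parameters that still require gradients (the classifier head).
optimizer = torch.optim.Adam(
    (p for p in classifier.parameters() if p.requires_grad), lr=1e-3)
```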

Yes, good embeddings are the prerequisite for later classification with smaller, lower-parameter models.
Thanks a lot, I will build on this work and do some follow-up work!

Thanks again for your interest. Please remember to cite our paper. :-)