yangheng95/LCF-ATEPC

BERT-SPC model.

Ulitochka opened this issue · 4 comments

In the original paper (https://arxiv.org/pdf/1902.09314.pdf), the results of the BERT-SPC model on the Restaurant dataset (SemEval-2014, subtask 2) are: acc=0.8446, f1_macro=0.7698.

But in your work I see the results: acc=85.54, f1_macro=79.19.

Can you explain this, please?

Hello. In fact, the original paper (https://arxiv.org/pdf/1902.09314) did not fully unleash the potential of the BERT-SPC model; the author later improved the code and achieved higher scores than those reported in the paper. The author's latest code can be found in ABSA-PyTorch. Besides, our code is not based on ABSA-PyTorch, and the experimental settings are different. All the experimental results in our paper are based on this repository and have nothing to do with the previously reported results. It is recommended to run the training script under different hyperparameters to explore the potential of the BERT-SPC model.
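For readers unfamiliar with BERT-SPC: it is plain BERT fine-tuned on a sentence-pair-classification style input, where the sentence and the aspect term are packed into one sequence. Below is a minimal sketch of that input format; the model name, example sentence, and aspect are only illustrative and the snippet is not code from this repository.

```python
# Minimal sketch of the BERT-SPC input format (sentence-pair classification):
# the review sentence and the aspect term are packed into a single BERT
# sequence-pair input "[CLS] sentence [SEP] aspect [SEP]".
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

sentence = "The staff was friendly but the food was bland."  # illustrative example
aspect = "food"

encoded = tokenizer(sentence, aspect, return_tensors="pt")
print(tokenizer.decode(encoded["input_ids"][0]))
# -> [CLS] the staff was friendly but the food was bland. [SEP] food [SEP]
```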

But where is the BERT-SPC model code in this repo?

And how did you adapt this model for the multi-task setting?

The BERT-SPC model is embedded in lcf-atepc.py: set "local_context_focus" to "None" in exp-batch.json and the model is reduced to BERT-SPC (see lines 142-179).
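For intuition, here is a simplified sketch of how such a switch can reduce the fused LCF model to plain BERT-SPC. This is not the repository's actual implementation; the class, argument names, and dimensions are illustrative only, and it assumes the usual CDM/CDW behavior described in the LCF-ATEPC paper.

```python
# Simplified illustration: with "cdm"/"cdw" the local features are masked or
# down-weighted around the aspect and fused with the global features; with
# "None" only the global BERT-SPC features are used, i.e. plain BERT-SPC.
import torch
import torch.nn as nn

class PolarityHead(nn.Module):
    def __init__(self, hidden_size=768, num_polarities=3, local_context_focus="None"):
        super().__init__()
        self.local_context_focus = local_context_focus
        self.fusion = nn.Linear(hidden_size * 2, hidden_size)  # used only with CDM/CDW
        self.pooler = nn.Linear(hidden_size, hidden_size)
        self.classifier = nn.Linear(hidden_size, num_polarities)

    def forward(self, global_features, local_features=None, local_mask=None):
        if self.local_context_focus in ("cdm", "cdw") and local_features is not None:
            # CDM/CDW: suppress tokens far from the aspect, then fuse local + global
            local_features = local_features * local_mask.unsqueeze(-1)
            features = self.fusion(torch.cat([local_features, global_features], dim=-1))
        else:
            # local_context_focus == "None": fall back to the global BERT-SPC features
            features = global_features
        pooled = torch.tanh(self.pooler(features[:, 0]))  # [CLS] representation
        return self.classifier(pooled)
```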

Thanks!