Code for [Context-Aware Dynamic Word Embeddings For Aspect Term Extraction] (submitted to IEEE Transactions on Affective Computing and Affective Language Resources).
Trained model checkpoints: [Laptop] [Restaurant 16]

Requirements:
- pytorch=1.3.1
- python=3.7.5
- transformers=2.3.0
- dgl=0.5
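As an optional sanity check (this snippet is ours, not part of the released code), you can confirm the pinned versions are installed before running anything:

```python
# Optional sanity check: confirm the pinned dependency versions.
import torch
import transformers
import dgl

print(torch.__version__)         # expected: 1.3.1
print(transformers.__version__)  # expected: 2.3.0
print(dgl.__version__)           # expected: 0.5.x
```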
Download the official datasets and official evaluation scripts. We assume the following file layout.

SemEval 2014 Laptop (http://alt.qcri.org/semeval2014/task4/):
semeval/Laptops_Test_Data_PhaseA.xml
semeval/Laptops_Test_Gold.xml
semeval/eval.jar
SemEval 2016 Restaurant (http://alt.qcri.org/semeval2016/task5/):
semeval/EN_REST_SB1_TEST.xml.A
semeval/EN_REST_SB1_TEST.xml.gold
semeval/A.jar
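To sanity-check the downloaded files, here is a minimal parsing sketch; the tag and attribute names (`sentence`, `text`, `aspectTerm`, `term`) follow the SemEval 2014 Task 4 XML schema, so adjust them if your copy differs:

```python
# Minimal sketch: list gold aspect terms from the SemEval 2014 Laptop file.
# Tag/attribute names follow the SemEval 2014 Task 4 schema.
import xml.etree.ElementTree as ET

root = ET.parse("semeval/Laptops_Test_Gold.xml").getroot()
for sentence in root.iter("sentence"):
    text = sentence.findtext("text")
    terms = [t.get("term") for t in sentence.iter("aspectTerm")]
    print(text, terms)
```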
Download the pre-trained embeddings from [data].
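The exact file format depends on the [data] download; assuming a standard word2vec-style text file (one `word v1 v2 ...` line per word, optionally preceded by a `count dim` header), it can be loaded like this:

```python
# Hedged sketch: load word2vec-style text embeddings. The actual [data]
# file may use a different format; adjust the parsing accordingly.
import numpy as np

def load_embeddings(path):
    vocab, vectors = {}, []
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if len(parts) <= 2:  # skip an optional "count dim" header line
                continue
            vocab[parts[0]] = len(vectors)
            vectors.append(np.asarray(parts[1:], dtype=np.float32))
    return vocab, np.stack(vectors)
```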
Train:
python train_laptop.py
python train_res.py
Evaluate:
python evaluation_laptop.py [checkpoints]
python evaluation_res.py [checkpoints]
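The evaluation scripts call the official jars internally. If you want to score a prediction file by hand, the invocations below follow the usage common in earlier ATE repositories (e.g., DE-CNN); the entry-point names, flag values, and `pred_*.xml` file names are assumptions, so verify them against the official evaluation instructions:

```python
# Hedged sketch: score prediction XML files with the official jars.
# Entry points and flags follow common usage in prior ATE repos;
# pred_laptop.xml / pred_rest.xml are hypothetical prediction files.
import subprocess

# SemEval 2014 Laptop: compare a prediction XML to the gold XML.
subprocess.run(
    ["java", "-cp", "semeval/eval.jar", "Main.Aspects",
     "pred_laptop.xml", "semeval/Laptops_Test_Gold.xml"],
    check=True,
)

# SemEval 2016 Restaurant: Phase A (aspect extraction), Subtask 1.
# The flag values here are placeholders; consult the jar's usage message.
subprocess.run(
    ["java", "-cp", "semeval/A.jar", "absa16.Do", "Eval",
     "-prd", "pred_rest.xml", "-gld", "semeval/EN_REST_SB1_TEST.xml.gold",
     "-evs", "1", "-phs", "A", "-sbt", "SB1"],
    check=True,
)
```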
Note that to reproduce the results of the following baselines, we used Anaconda to create a separate environment for each paper, following the corresponding README of its code:
- DE-CNN [paper] [code] [checkpoints]
- Seq4Seq [paper] [code] [checkpoints]
- MT-TSMSA [paper] [code] [checkpoints]
- CL-BERT [paper] [code] [checkpoints]
Besides, we also modified the CL-BERT model by adding a domain embedding to the word representations. The code is in [CL-BERT-new].
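As an illustration only (this is not the released CL-BERT-new code, and all class and parameter names below are hypothetical), one way to concatenate a learned domain vector onto each token representation produced by BERT:

```python
# Illustrative sketch, not the released CL-BERT-new code: append a learned
# domain vector to every token representation produced by BERT.
import torch
import torch.nn as nn
from transformers import BertModel

class BertWithDomainEmbedding(nn.Module):  # hypothetical name
    def __init__(self, bert_name="bert-base-uncased", num_domains=2, domain_dim=64):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.domain_embedding = nn.Embedding(num_domains, domain_dim)

    def forward(self, input_ids, attention_mask, domain_id):
        # transformers 2.3.0 returns a tuple; [0] is the sequence output.
        hidden = self.bert(input_ids=input_ids, attention_mask=attention_mask)[0]
        dom = self.domain_embedding(domain_id)               # (batch, domain_dim)
        dom = dom.unsqueeze(1).expand(-1, hidden.size(1), -1)
        return torch.cat([hidden, dom], dim=-1)              # (batch, seq, hidden+domain_dim)
```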
Step 1: Download the datasets and pre-trained model weights from [code], and place the weight files as:
bert-pt/bert-laptop/
bert-pt/bert-rest/
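Assuming the downloaded weights are in the standard `transformers` checkpoint format (a `config.json` plus model weights in each directory), they can be loaded directly, e.g.:

```python
# Sketch: load the post-trained weights placed above (laptop shown;
# use bert-pt/bert-rest for the restaurant domain).
from transformers import BertModel, BertTokenizer

bert = BertModel.from_pretrained("bert-pt/bert-laptop")
tokenizer = BertTokenizer.from_pretrained("bert-pt/bert-laptop")
```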
Step 2: Train and evaluate:
python main.py