We have modified the scripts slightly to log results to results.txt so that the results of multiple runs are collected automatically. We have otherwise made no changes to the training and testing regime. We run the scripts 10 times per language using our bash script:
chmod +x ./script.sh
./script.sh -c 0 -r 10 -d /path/to/semeval/root
(In practice, we split each language into its own script so they can run in parallel.) We then compiled the logged results into one CSV file per language and placed these CSVs in ./lang-test-results.
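For reference, the per-run lines in results.txt can be compiled into one CSV per language with a short helper along these lines. The `key=value` line format assumed below is a guess, not the actual log format; adapt the parsing to whatever the modified scripts write:

```python
import csv

def compile_results(results_text, csv_path=None):
    """Parse logged runs and optionally write them out as a CSV.

    Assumes each results.txt line looks like (hypothetical format):
        run=3 lang=Spanish jaccard=0.451
    Adjust the split logic to match the real log lines.
    """
    rows = []
    for line in results_text.strip().splitlines():
        # Split each whitespace-separated field at its first '='.
        fields = dict(part.split("=", 1) for part in line.split())
        rows.append(fields)
    if csv_path is not None:
        with open(csv_path, "w", newline="") as fh:
            writer = csv.DictWriter(fh, fieldnames=list(rows[0]))
            writer.writeheader()
            writer.writerows(rows)
    return rows
```
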
Disclaimer: we have also updated the requirements slightly to work with Python 3.7.4 and PyTorch 1.10.1. For the reported averages and standard deviations, we exclude runs 0 and 7 from the Spanish CSV because their results were particularly subpar.
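The reported mean and standard deviation can then be computed while dropping the two outlier runs; a minimal sketch using only the standard library (the example scores are illustrative, not real results):

```python
import statistics

def summarize(scores, exclude=()):
    """Mean and sample standard deviation over runs, skipping the
    run indices in `exclude` (e.g. {0, 7} for the Spanish CSV)."""
    kept = [s for i, s in enumerate(scores) if i not in exclude]
    return statistics.mean(kept), statistics.stdev(kept)
```
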
The original README resumes below:
Source code for the paper "SpanEmo: Casting Multi-label Emotion Classification as Span-prediction" in EACL2021.
We used Python 3.6 and torch 1.2.0. Other packages can be installed via:
pip install -r requirements.txt
The model was trained on an Nvidia GeForce GTX1080 with 11GB memory, Ubuntu 18.10.
You first need to download the dataset (Link) for the language of your choice (i.e., English, Arabic, or Spanish) and then place it in the data/ directory.
Next, run the main script to do the following:
- data loading and preprocessing
- model creation and training
python scripts/train.py --train-path {} --dev-path {}
Options:
-h --help show this screen
--loss-type=<str> which loss to use cross-ent|corr|joint. [default: cross-entropy]
--max-length=<int> text length [default: 128]
--output-dropout=<float> prob of dropout applied to the output layer [default: 0.1]
--seed=<int> fixed random seed number [default: 42]
--train-batch-size=<int> batch size [default: 32]
--eval-batch-size=<int> batch size [default: 32]
--max-epoch=<int> max epoch [default: 20]
--ffn-lr=<float> ffn learning rate [default: 0.001]
--bert-lr=<float> bert learning rate [default: 2e-5]
--lang=<str> language choice [default: English]
--dev-path=<str> file path of the dev set [default: '']
--train-path=<str> file path of the train set [default: '']
--alpha-loss=<float> weight used to balance the loss [default: 0.2]
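The --alpha-loss flag balances the two loss terms when --loss-type=joint. A hedged sketch of the combination, where the (1 - alpha)/alpha weighting is an assumption based on the paper, so check scripts/train.py for the exact formula the code uses:

```python
def joint_loss(bce, corr, alpha=0.2):
    """Combine the cross-entropy term with the label-correlation term.

    `alpha` plays the role of --alpha-loss; alpha=0.2 mirrors the
    documented default. The weighting scheme here is an assumption.
    """
    return (1.0 - alpha) * bce + alpha * corr
```
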
Once the above step is done, you can then evaluate on the test set using the trained model:
python scripts/test.py --test-path {} --model-path {}
Options:
-h --help show this screen
--model-path=<str> path of the trained model
--max-length=<int> text length [default: 128]
--seed=<int> seed [default: 0]
--test-batch-size=<int> batch size [default: 32]
--lang=<str> language choice [default: English]
--test-path=<str> file path of the test set [default: '']
Please cite the following paper if you find it useful. Thanks :)
@inproceedings{alhuzali-ananiadou-2021-spanemo,
title = "{S}pan{E}mo: Casting Multi-label Emotion Classification as Span-prediction",
author = "Alhuzali, Hassan and
Ananiadou, Sophia",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.eacl-main.135",
pages = "1573--1584",
}