# Intermediate-linguistic-task-fine-tuning-on-Multilingual-models

The main goal of this thesis was twofold: to shed light on the performance and quality issues of modern NLP approaches when applied to multilingual, data-scarce domains, and to propose an approach that mitigates these problems, improving on the results produced by the baseline models. The proposed approach, named Intermediate Task Training, introduces a supplementary step into the model training pipeline, using tasks that are not strictly related to the target task.
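The two-stage idea can be illustrated with a minimal PyTorch sketch. This is a toy illustration, not the thesis implementation: a shared encoder is first fine-tuned on an auxiliary (intermediate) task with its own classification head, and the warmed-up encoder is then reused for the target task. All dimensions, data, and class counts below are hypothetical placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)


class SharedEncoder(nn.Module):
    """Toy stand-in for a pretrained multilingual encoder."""

    def __init__(self, dim=16, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())

    def forward(self, x):
        return self.net(x)


def train_stage(encoder, head, x, y, steps=50):
    """Fine-tune encoder + task head on one task; return (first, last) loss."""
    params = list(encoder.parameters()) + list(head.parameters())
    opt = torch.optim.Adam(params, lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    first = last = None
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(head(encoder(x)), y)
        loss.backward()
        opt.step()
        last = loss.item()
        if first is None:
            first = last
    return first, last


encoder = SharedEncoder()

# Stage 1: intermediate task (toy data, 3 classes, unrelated to the target).
x_int, y_int = torch.randn(64, 16), torch.randint(0, 3, (64,))
train_stage(encoder, nn.Linear(32, 3), x_int, y_int)

# Stage 2: target task reuses the same (now warmed-up) encoder with a new head.
x_tgt, y_tgt = torch.randn(64, 16), torch.randint(0, 2, (64,))
first_loss, final_loss = train_stage(encoder, nn.Linear(32, 2), x_tgt, y_tgt)
```

The only design point the sketch carries over from the approach is the ordering: the encoder weights reached after the intermediate stage become the initialization for the target stage, instead of fine-tuning directly from the pretrained checkpoint.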

