facebookresearch/MLQA

Fine-tuning XLM-R on the dev set of each language

nooralahzadeh opened this issue

Hi,
Have you tried taking the XLM-R model after it was fine-tuned on English, further fine-tuning it on the dev set of each other language (few-shot learning), and then evaluating on that language's test set?
The strange thing is that XLM-R's performance is lower in this few-shot setting than in the zero-shot setting.
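Concretely, the setup I mean looks like this. A minimal sketch with the Hugging Face transformers API; the checkpoint path is hypothetical, and a single toy German example stands in for the real MLQA dev data:

```python
# Sketch: continue fine-tuning an English-fine-tuned XLM-R QA checkpoint
# on one target-language example (one gradient step of the few-shot phase).
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

CKPT = "path/to/xlmr-finetuned-on-english-squad"  # hypothetical checkpoint
tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForQuestionAnswering.from_pretrained(CKPT)
model.train()

# Toy target-language example ("How many moons does Mars have?" -> "two").
question = "Wie viele Monde hat der Mars?"
context = "Der Mars hat zwei Monde, Phobos und Deimos."
answer = "zwei"

# Locate the answer span at the character level, then map it to tokens.
char_start = context.index(answer)
char_end = char_start + len(answer)

enc = tokenizer(question, context, return_offsets_mapping=True,
                truncation=True, return_tensors="pt")
offsets = enc.pop("offset_mapping")[0]
seq_ids = enc.sequence_ids(0)

start_tok = end_tok = 0
for i, (s, e) in enumerate(offsets.tolist()):
    if seq_ids[i] != 1:  # consider only context tokens
        continue
    if s <= char_start < e:
        start_tok = i
    if s < char_end <= e:
        end_tok = i

# One few-shot fine-tuning step on the target-language example.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
outputs = model(**enc,
                start_positions=torch.tensor([start_tok]),
                end_positions=torch.tensor([end_tok]))
outputs.loss.backward()
optimizer.step()
```

After a few hundred such steps over the target-language dev set, I evaluate on that language's test split, and that is where the score drops below the zero-shot number.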

Thanks