/multilingual-question-answering

An XLM-RoBERTa model (a transformer-based masked language model) fine-tuned for multilingual question answering on a limited dataset, yielding significant performance gains for underrepresented languages such as Hindi and Tamil. K-Fold cross-validation and ensembling were incorporated to further improve accuracy; a sketch of that setup follows below.
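A minimal sketch of the K-fold fine-tuning and logit-averaging ensemble described above, assuming a Hugging Face `Trainer` workflow. All names here (the `xlm-roberta-base` checkpoint size, output paths, hyperparameters, and the `tokenized_dataset` variable) are illustrative assumptions, not the project's actual configuration; `tokenized_dataset` is assumed to be a `datasets.Dataset` already preprocessed into QA features (`input_ids`, `attention_mask`, `start_positions`, `end_positions`).

```python
import numpy as np
import torch
from sklearn.model_selection import KFold
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "xlm-roberta-base"  # assumption: base-size checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)


def train_folds(tokenized_dataset, n_splits=5):
    """Fine-tune one model per fold and return all fold models."""
    kfold = KFold(n_splits=n_splits, shuffle=True, random_state=42)
    models = []
    for fold, (train_idx, val_idx) in enumerate(
        kfold.split(np.arange(len(tokenized_dataset)))
    ):
        # Fresh pretrained weights for each fold.
        model = AutoModelForQuestionAnswering.from_pretrained(MODEL_NAME)
        args = TrainingArguments(
            output_dir=f"xlmr-qa-fold{fold}",  # hypothetical output path
            num_train_epochs=2,                # illustrative hyperparameters
            per_device_train_batch_size=16,
            learning_rate=3e-5,
        )
        Trainer(
            model=model,
            args=args,
            train_dataset=tokenized_dataset.select(train_idx),
            eval_dataset=tokenized_dataset.select(val_idx),
        ).train()
        models.append(model)
    return models


@torch.no_grad()
def ensemble_logits(models, batch):
    """Average start/end span logits across fold models (simple ensembling)."""
    starts, ends = [], []
    for model in models:
        model.eval()
        out = model(**batch)
        starts.append(out.start_logits)
        ends.append(out.end_logits)
    return torch.stack(starts).mean(0), torch.stack(ends).mean(0)
```

Averaging span logits across the fold models is one common way to ensemble extractive QA heads; the highest-scoring start/end pair of the averaged logits is then decoded into the answer span as usual.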

Primary Language: Jupyter Notebook
