nlp-uoregon/Okapi

Instruction finetuning for Multilingual Tasks

Vikr-182 opened this issue

Hi!
Thank you for your awesome work!
I had a few doubts:

  • I understand you have fine-tuned a separate model for each language: https://huggingface.co/uonlp. I was curious whether you have tried fine-tuning on all languages simultaneously for improved multilingual understanding, i.e. a single model that works across the high-, medium-, and low-resource languages (see the sketch after this list for what I mean).
  • For the base LLMs, you seem to have used BLOOM and LLaMA-7B, to which you apply instruction fine-tuning techniques (like supervised fine-tuning, SFT) on the 3 datasets: ARC, HellaSwag, and MMLU. Why did you not use Llama-2-chat, which is already fine-tuned via SFT on a larger corpus of (instruction, input, output) pairs?

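To clarify the first point, here is a minimal sketch of what I mean by "fine-tuning on all languages simultaneously": mix the per-language instruction data and run one SFT pass over the mixture. This is not your training code; the file names, base checkpoint, prompt template, and hyperparameters are placeholders I made up.

```python
# Sketch only: mix hypothetical per-language instruction files and fine-tune a single
# base model on the combined data with Hugging Face datasets/transformers.
from datasets import load_dataset, concatenate_datasets
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

LANGS = ["en", "vi", "te"]  # placeholder mix of high/medium/low-resource languages

base = "huggyllama/llama-7b"  # placeholder; a BLOOM or Llama-2-chat checkpoint would also slot in here
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical per-language files of (instruction, input, output) records
per_lang = [load_dataset("json", data_files=f"instructions_{lang}.json", split="train")
            for lang in LANGS]
mixed = concatenate_datasets(per_lang).shuffle(seed=0)

def tokenize(example):
    # Alpaca-style prompt; the real template should match whatever you actually used
    text = (f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            f"### Response:\n{example['output']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = mixed.map(tokenize, remove_columns=mixed.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="multilingual-sft", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The idea would be to end up with one checkpoint covering all languages instead of one checkpoint per language.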
Please do let me know if there is a gap in my understanding! Thanks again

Vikrant Dewangan